2025-12-04T10:14:22.4476148Z Current runner version: '2.329.0'
2025-12-04T10:14:22.4479342Z Runner name: 'linux.rocm.gpu.gfx942.4.b-bphpw-runner-mcn25'
2025-12-04T10:14:22.4479753Z Runner group name: 'default'
2025-12-04T10:14:22.4480169Z Machine name: 'linux'
2025-12-04T10:14:22.4481373Z ##[group]GITHUB_TOKEN Permissions
2025-12-04T10:14:22.4482517Z Contents: read
2025-12-04T10:14:22.4482777Z Metadata: read
2025-12-04T10:14:22.4483019Z ##[endgroup]
2025-12-04T10:14:22.4484097Z Secret source: Actions
2025-12-04T10:14:22.4484411Z Prepare workflow directory
2025-12-04T10:14:22.4726148Z Prepare all required actions
2025-12-04T10:14:22.4745808Z Getting action download info
2025-12-04T10:14:23.0250059Z Download action repository 'pytorch/pytorch@main' (SHA:c0cb6e78404416d418350632bfc554710a5f7281)
2025-12-04T10:14:27.3714361Z Download action repository 'pytorch/test-infra@main' (SHA:39aa74d619174326f4e2fb0e216151c2f29d9ffd)
2025-12-04T10:14:28.8919891Z Download action repository 'actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02' (SHA:ea165f8d65b6e75b540449e92b4886f43607fa02)
2025-12-04T10:14:30.0627417Z Download action repository 'aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722' (SHA:ececac1a45f3b08a01d2dd070d28d111c5fe6722)
2025-12-04T10:14:31.2176292Z Getting action download info
2025-12-04T10:14:31.4434936Z Download action repository 'actions/checkout@v4' (SHA:34e114876b0b11c390a56381ad16ebd13914f8d5)
2025-12-04T10:14:32.5000936Z Getting action download info
2025-12-04T10:14:32.7321796Z Download action repository 'nick-fields/retry@v3.0.0' (SHA:7152eba30c6575329ac0576536151aca5a72780e)
2025-12-04T10:14:33.6197838Z Getting action download info
2025-12-04T10:14:33.8236606Z Uses: pytorch/pytorch/.github/workflows/_rocm-test.yml@refs/heads/main (ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32)
2025-12-04T10:14:33.8243407Z ##[group] Inputs
2025-12-04T10:14:33.8243879Z build-environment: linux-jammy-rocm-py3.10
2025-12-04T10:14:33.8256418Z test-matrix: {"include": [{"config": "default", "shard": 1, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 1, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}]}
2025-12-04T10:14:33.8267221Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a
2025-12-04T10:14:33.8268130Z sync-tag:
2025-12-04T10:14:33.8269435Z timeout-minutes: 300
2025-12-04T10:14:33.8269771Z tests-to-include:
2025-12-04T10:14:33.8270079Z dashboard-tag:
2025-12-04T10:14:33.8270835Z disable-monitor: true
2025-12-04T10:14:33.8271196Z monitor-log-interval: 5
2025-12-04T10:14:33.8271581Z monitor-data-collect-interval: 1
2025-12-04T10:14:33.8271976Z ##[endgroup]
2025-12-04T10:14:33.8272640Z Complete job name: linux-jammy-rocm-py3.10 / test (distributed, 1, 3, linux.rocm.gpu.gfx942.4.b, mem_leak_check, unstable)
2025-12-04T10:14:33.8681496Z ##[group]Run pytorch/pytorch/.github/actions/checkout-pytorch@main
2025-12-04T10:14:33.8681777Z with:
2025-12-04T10:14:33.8681864Z no-sudo: true
2025-12-04T10:14:33.8681960Z submodules: recursive
2025-12-04T10:14:33.8682058Z fetch-depth: 0
2025-12-04T10:14:33.8682186Z env:
2025-12-04T10:14:33.8682282Z GIT_DEFAULT_BRANCH: main
2025-12-04T10:14:33.8682391Z ##[endgroup]
2025-12-04T10:14:33.8805553Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT"
2025-12-04T10:14:33.8805913Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT"
2025-12-04T10:14:33.8812333Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-12-04T10:14:33.8812486Z env:
2025-12-04T10:14:33.8812574Z GIT_DEFAULT_BRANCH: main
2025-12-04T10:14:33.8812672Z ##[endgroup]
2025-12-04T10:14:33.9122237Z ##[group]Run actions/checkout@v4
2025-12-04T10:14:33.9122674Z with:
2025-12-04T10:14:33.9123022Z ref: ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32
2025-12-04T10:14:33.9123427Z fetch-depth: 0
2025-12-04T10:14:33.9123728Z submodules: recursive
2025-12-04T10:14:33.9124173Z show-progress: false
2025-12-04T10:14:33.9124501Z repository: pytorch/pytorch
2025-12-04T10:14:33.9125057Z token: ***
2025-12-04T10:14:33.9125333Z ssh-strict: true
2025-12-04T10:14:33.9125608Z ssh-user: git
2025-12-04T10:14:33.9125909Z persist-credentials: true
2025-12-04T10:14:33.9126236Z clean: true
2025-12-04T10:14:33.9126552Z sparse-checkout-cone-mode: true
2025-12-04T10:14:33.9126920Z fetch-tags: false
2025-12-04T10:14:33.9127196Z lfs: false
2025-12-04T10:14:33.9127478Z set-safe-directory: true
2025-12-04T10:14:33.9127791Z env:
2025-12-04T10:14:33.9128060Z GIT_DEFAULT_BRANCH: main
2025-12-04T10:14:33.9128373Z ##[endgroup]
2025-12-04T10:14:33.9823771Z Syncing repository: pytorch/pytorch
2025-12-04T10:14:33.9824967Z ##[group]Getting Git version info
2025-12-04T10:14:33.9825307Z Working directory is '/home/runner/_work/pytorch/pytorch'
2025-12-04T10:14:33.9825760Z [command]/usr/bin/git version
2025-12-04T10:14:33.9826027Z git version 2.52.0
2025-12-04T10:14:33.9826753Z ##[endgroup]
2025-12-04T10:14:33.9830018Z Copying '/home/runner/.gitconfig' to '/home/runner/_work/_temp/ea5eef63-c51c-4d52-9e5f-ff839d3d979a/.gitconfig'
2025-12-04T10:14:33.9830799Z Temporarily overriding HOME='/home/runner/_work/_temp/ea5eef63-c51c-4d52-9e5f-ff839d3d979a' before making global git config changes
2025-12-04T10:14:33.9831627Z Adding repository directory to the temporary git global config as a safe directory
2025-12-04T10:14:33.9832150Z [command]/usr/bin/git config --global --add safe.directory /home/runner/_work/pytorch/pytorch
2025-12-04T10:14:33.9838560Z [command]/usr/bin/git config --local --get remote.origin.url
2025-12-04T10:14:33.9864637Z https://github.com/pytorch/pytorch
2025-12-04T10:14:33.9874323Z ##[group]Removing previously created refs, to avoid conflicts
2025-12-04T10:14:33.9875800Z [command]/usr/bin/git rev-parse --symbolic-full-name --verify --quiet HEAD
2025-12-04T10:14:33.9891330Z refs/heads/main
2025-12-04T10:14:33.9903286Z [command]/usr/bin/git checkout --detach
2025-12-04T10:14:35.7589566Z HEAD is now at c0cb6e784044 [DTensor] ExplicitRedistributionContext warning mode (#169452)
2025-12-04T10:14:35.7660022Z [command]/usr/bin/git branch --delete --force main
2025-12-04T10:14:35.7843309Z Deleted branch main (was c0cb6e784044).
2025-12-04T10:14:35.7851100Z ##[endgroup]
2025-12-04T10:14:35.7858371Z [command]/usr/bin/git submodule status
2025-12-04T10:14:35.8163121Z 7e1e1fe3858c63c251c637ae41a20de425dde96f android/libs/fbjni (v0.1.0-12-g7e1e1fe)
2025-12-04T10:14:35.8231647Z 4dfe081cf6bcd15db339cf2680b9281b8451eeb3 third_party/FP16 (4dfe081)
2025-12-04T10:14:35.8328448Z b408327ac2a15ec3e43352421954f5b1967701d1 third_party/FXdiv (b408327)
2025-12-04T10:14:35.8416328Z c07e3a0400713d546e0dea2d5466dd22ea389c73 third_party/NNPACK (c07e3a0)
2025-12-04T10:14:35.8446874Z 3ebbc93ded7285963bff932c678fa367eb393ba6 third_party/NVTX (v3.1.0-313-g3ebbc93)
2025-12-04T10:14:35.8522901Z 1d8f600fd424278486eade7ed3e877c99f0846b1 third_party/VulkanMemoryAllocator (v2.1.0-982-g1d8f600)
2025-12-04T10:14:35.8879181Z 51a0103656eff6fc9bfd39a4597923c4b542c883 third_party/XNNPACK (remotes/origin/ds/ndk-1243-g51a0103656)
2025-12-04T10:14:35.8931286Z 01aae101b9e5e94d6c16a9514c9fb8df99c93150 third_party/aiter (v0.1.1-92-g01aae101)
2025-12-04T10:14:35.8957907Z 299e5928955cc62af9968370293b916f5130916f third_party/benchmark (v1.9.3)
2025-12-04T10:14:35.9030125Z 7fe50dc3da2069d6645d9deb8c017a876472a977 third_party/composable_kernel (rocm-6.4.3-459-g7fe50dc3d)
2025-12-04T10:14:35.9153131Z 89c932f313c6437c38f2982869beacc89c2f2246 third_party/cpp-httplib (v0.26.0)
2025-12-04T10:14:35.9295307Z f858c30bcb16f8effd5ff46996f0514539e17abc third_party/cpuinfo (f858c30)
2025-12-04T10:14:35.9362479Z 0b1577c8c83401237d601d0d0db5210506705396 third_party/cudnn_frontend (v0.5-61-g0b1577c)
2025-12-04T10:14:35.9464951Z f88806b1e31dfa579842638740216dd41fc6c588 third_party/cutlass (v4.3.1)
2025-12-04T10:14:35.9506330Z c0b988d39a9e47c794d699f29930ed4d7c7e13a4 third_party/fbgemm (v1.4.0-rc1-2-gc0b988d39)
2025-12-04T10:14:35.9588331Z 979702c87a8713a8e0a5e9fee122b90d2ef13be5 third_party/flash-attention (v2.7.4)
2025-12-04T10:14:35.9629920Z a2cd1ea3b6d3fee220106b5fed3f7ce8da9eb757 third_party/flatbuffers (v24.12.23)
2025-12-04T10:14:35.9914133Z 407c905e45ad75fc29bf0f9bb7c5c2fd3475976f third_party/fmt (12.1.0)
2025-12-04T10:14:36.0016792Z 3fb5c176c17c765a3492cd2f0321b0dab712f350 third_party/gemmlowp/gemmlowp (remotes/origin/revert-87-master-135-g3fb5c17)
2025-12-04T10:14:36.0168468Z 54cbae0d3a67fa890b4c3d9ee162b7860315e341 third_party/gloo (remotes/origin/gh/c-p-i-o/1/base-37-g54cbae0)
2025-12-04T10:14:36.0327825Z 52eb8108c5bdec04579160ae17225d66034bd723 third_party/googletest (release-1.8.0-3544-g52eb8108)
2025-12-04T10:14:36.0419925Z 719d8e6cd7f7a0e01b155657526d693acf97c2b3 third_party/ideep (pytorch-rls-v3.7.1)
2025-12-04T10:14:36.0500355Z dec1d23ca65ab069d225dfe40dea14f455170959 third_party/ittapi (v3.25.5)
2025-12-04T10:14:36.0672463Z 31f85df8fbd89c188f14ef10f1ec65379786b943 third_party/kineto (heads/main)
2025-12-04T10:14:36.0715236Z d7770c89632329a9914ef1a90289917597639cbe third_party/kleidiai (v1.15.0)
2025-12-04T10:14:36.0751439Z fbd8b99c2b828428947d70fdc046bb55609be93e third_party/mimalloc (v2.2.4)
2025-12-04T10:14:36.0787581Z 55f93686c01528224f448c19128836e7df245f72 third_party/nlohmann (v3.12.0)
2025-12-04T10:14:36.1004982Z e709452ef2bbc1d113faf678c24e6d3467696e83 third_party/onnx (v1.18.0)
2025-12-04T10:14:36.1037763Z a799f4aed9c94b765dcdaabaeab7d5e7e2310878 third_party/opentelemetry-cpp (v1.14.2)
2025-12-04T10:14:36.1081389Z 0fa0ef591e38c2758e3184c6c23e497b9f732ffa third_party/pocketfft (release_for_eigen-40-g0fa0ef5)
2025-12-04T10:14:36.1306675Z d1eca4e4b421cd2997495c4b4e65cea6be4e9b8a third_party/protobuf (v3.7.0-rc.2-1279-gd1eca4e4b)
2025-12-04T10:14:36.1400076Z 072586a71b55b7f8c584153d223e95687148a900 third_party/psimd (heads/master)
2025-12-04T10:14:36.1480596Z 4fe0e1e183925bf8cfa6aae24237e724a96479b8 third_party/pthreadpool (0.1-144-g4fe0e1e)
2025-12-04T10:14:36.1521935Z f5fbe867d2d26e4a0a9177a51f6e568868ad3dc8 third_party/pybind11 (v3.0.1)
2025-12-04T10:14:36.1614285Z f45429b087dd7d5bc78bb40dc7cf06425c252d67 third_party/python-peachpy (remotes/origin/pre-generated)
2025-12-04T10:14:36.1706282Z 5a1d179df9cf652951b59010a2d2075372d67f68 third_party/sleef (3.8)
2025-12-04T10:14:36.1796986Z 2b4cd91092d335a697416b2a3cb398283246849d third_party/tensorpipe (heads/main)
2025-12-04T10:14:36.1812358Z ##[group]Cleaning the repository
2025-12-04T10:14:36.1816549Z [command]/usr/bin/git clean -ffdx
2025-12-04T10:14:36.1944134Z [command]/usr/bin/git reset --hard HEAD
2025-12-04T10:14:36.2696395Z HEAD is now at c0cb6e784044 [DTensor] ExplicitRedistributionContext warning mode (#169452)
2025-12-04T10:14:36.2763440Z ##[endgroup]
2025-12-04T10:14:36.2767336Z ##[group]Disabling automatic garbage collection
2025-12-04T10:14:36.2772719Z [command]/usr/bin/git config --local gc.auto 0
2025-12-04T10:14:36.2816141Z ##[endgroup]
2025-12-04T10:14:36.2816690Z ##[group]Setting up auth
2025-12-04T10:14:36.2826602Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2025-12-04T10:14:36.2860100Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
2025-12-04T10:14:36.3188391Z Entering 'android/libs/fbjni'
2025-12-04T10:14:36.3238003Z Entering 'third_party/FP16'
2025-12-04T10:14:36.3276784Z Entering 'third_party/FXdiv'
2025-12-04T10:14:36.3316611Z Entering 'third_party/NNPACK'
2025-12-04T10:14:36.3364638Z Entering 'third_party/NVTX'
2025-12-04T10:14:36.3398835Z Entering 'third_party/VulkanMemoryAllocator'
2025-12-04T10:14:36.3423352Z Entering 'third_party/XNNPACK'
2025-12-04T10:14:36.3476700Z Entering 'third_party/aiter'
2025-12-04T10:14:36.3504540Z Entering 'third_party/aiter/3rdparty/composable_kernel'
2025-12-04T10:14:36.3532559Z Entering 'third_party/benchmark'
2025-12-04T10:14:36.3555450Z Entering 'third_party/composable_kernel'
2025-12-04T10:14:36.3581093Z Entering 'third_party/cpp-httplib'
2025-12-04T10:14:36.3615481Z Entering 'third_party/cpuinfo'
2025-12-04T10:14:36.3657305Z Entering 'third_party/cudnn_frontend'
2025-12-04T10:14:36.3696043Z Entering 'third_party/cutlass'
2025-12-04T10:14:36.3723327Z Entering 'third_party/fbgemm'
2025-12-04T10:14:36.3787411Z Entering 'third_party/fbgemm/external/asmjit'
2025-12-04T10:14:36.3826802Z Entering 'third_party/fbgemm/external/composable_kernel'
2025-12-04T10:14:36.3872309Z Entering 'third_party/fbgemm/external/cpuinfo'
2025-12-04T10:14:36.3898777Z Entering 'third_party/fbgemm/external/cutlass'
2025-12-04T10:14:36.3949547Z Entering 'third_party/fbgemm/external/googletest'
2025-12-04T10:14:36.3986463Z Entering 'third_party/fbgemm/external/hipify_torch'
2025-12-04T10:14:36.4028788Z Entering 'third_party/fbgemm/external/json'
2025-12-04T10:14:36.4056002Z Entering 'third_party/flash-attention'
2025-12-04T10:14:36.4108587Z Entering 'third_party/flash-attention/csrc/composable_kernel'
2025-12-04T10:14:36.4147977Z Entering 'third_party/flash-attention/csrc/cutlass'
2025-12-04T10:14:36.4179896Z Entering 'third_party/flatbuffers'
2025-12-04T10:14:36.4208558Z Entering 'third_party/fmt'
2025-12-04T10:14:36.4244013Z Entering 'third_party/gemmlowp/gemmlowp'
2025-12-04T10:14:36.4277191Z Entering 'third_party/gloo'
2025-12-04T10:14:36.4305700Z Entering 'third_party/googletest'
2025-12-04T10:14:36.4339588Z Entering 'third_party/ideep'
2025-12-04T10:14:36.4377478Z Entering 'third_party/ideep/mkl-dnn'
2025-12-04T10:14:36.4415482Z Entering 'third_party/ittapi'
2025-12-04T10:14:36.4438508Z Entering 'third_party/kineto'
2025-12-04T10:14:36.4473735Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2025-12-04T10:14:36.4499003Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2025-12-04T10:14:36.4519982Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2025-12-04T10:14:36.4541892Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2025-12-04T10:14:36.4565176Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2025-12-04T10:14:36.4604909Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2025-12-04T10:14:36.4631910Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2025-12-04T10:14:36.4658502Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2025-12-04T10:14:36.4690296Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2025-12-04T10:14:36.4716853Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2025-12-04T10:14:36.4753767Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp'
2025-12-04T10:14:36.4789399Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:36.4833697Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:36.4895764Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2025-12-04T10:14:36.4941985Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2025-12-04T10:14:36.4990206Z Entering 'third_party/kleidiai'
2025-12-04T10:14:36.5032308Z Entering 'third_party/mimalloc'
2025-12-04T10:14:36.5086666Z Entering 'third_party/nlohmann'
2025-12-04T10:14:36.5129362Z Entering 'third_party/onnx'
2025-12-04T10:14:36.5191680Z Entering 'third_party/onnx/third_party/pybind11'
2025-12-04T10:14:36.5219360Z Entering 'third_party/opentelemetry-cpp'
2025-12-04T10:14:36.5255635Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2025-12-04T10:14:36.5280238Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2025-12-04T10:14:36.5305874Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2025-12-04T10:14:36.5328486Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2025-12-04T10:14:36.5354131Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2025-12-04T10:14:36.5380470Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2025-12-04T10:14:36.5400318Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2025-12-04T10:14:36.5429239Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:36.5475435Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:36.5513269Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2025-12-04T10:14:36.5560216Z Entering 'third_party/pocketfft'
2025-12-04T10:14:36.5590841Z Entering 'third_party/protobuf'
2025-12-04T10:14:36.5622314Z Entering 'third_party/protobuf/third_party/benchmark'
2025-12-04T10:14:36.5664024Z Entering 'third_party/protobuf/third_party/googletest'
2025-12-04T10:14:36.5724554Z Entering 'third_party/psimd'
2025-12-04T10:14:36.5774838Z Entering 'third_party/pthreadpool'
2025-12-04T10:14:36.5809448Z Entering 'third_party/pybind11'
2025-12-04T10:14:36.5849325Z Entering 'third_party/python-peachpy'
2025-12-04T10:14:36.5875789Z Entering 'third_party/sleef'
2025-12-04T10:14:36.5905944Z Entering 'third_party/tensorpipe'
2025-12-04T10:14:36.5931090Z Entering 'third_party/tensorpipe/third_party/googletest'
2025-12-04T10:14:36.5956898Z Entering 'third_party/tensorpipe/third_party/libnop'
2025-12-04T10:14:36.6002137Z Entering 'third_party/tensorpipe/third_party/libuv'
2025-12-04T10:14:36.6024825Z Entering 'third_party/tensorpipe/third_party/pybind11'
2025-12-04T10:14:36.6061506Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2025-12-04T10:14:36.6142481Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2025-12-04T10:14:36.6174619Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
2025-12-04T10:14:36.6392518Z Entering 'android/libs/fbjni'
2025-12-04T10:14:36.6437566Z Entering 'third_party/FP16'
2025-12-04T10:14:36.6475613Z Entering 'third_party/FXdiv'
2025-12-04T10:14:36.6517053Z Entering 'third_party/NNPACK'
2025-12-04T10:14:36.6554211Z Entering 'third_party/NVTX'
2025-12-04T10:14:36.6582736Z Entering 'third_party/VulkanMemoryAllocator'
2025-12-04T10:14:36.6607624Z Entering 'third_party/XNNPACK'
2025-12-04T10:14:36.6633061Z Entering 'third_party/aiter'
2025-12-04T10:14:36.6673661Z Entering 'third_party/aiter/3rdparty/composable_kernel'
2025-12-04T10:14:36.6716216Z Entering 'third_party/benchmark'
2025-12-04T10:14:36.6749582Z Entering 'third_party/composable_kernel'
2025-12-04T10:14:36.6785979Z Entering 'third_party/cpp-httplib'
2025-12-04T10:14:36.6833972Z Entering 'third_party/cpuinfo'
2025-12-04T10:14:36.6857701Z Entering 'third_party/cudnn_frontend'
2025-12-04T10:14:36.6879141Z Entering 'third_party/cutlass'
2025-12-04T10:14:36.6902598Z Entering 'third_party/fbgemm'
2025-12-04T10:14:36.6924985Z Entering 'third_party/fbgemm/external/asmjit'
2025-12-04T10:14:36.6953926Z Entering 'third_party/fbgemm/external/composable_kernel'
2025-12-04T10:14:36.6979157Z Entering 'third_party/fbgemm/external/cpuinfo'
2025-12-04T10:14:36.7001277Z Entering 'third_party/fbgemm/external/cutlass'
2025-12-04T10:14:36.7023509Z Entering 'third_party/fbgemm/external/googletest'
2025-12-04T10:14:36.7045425Z Entering 'third_party/fbgemm/external/hipify_torch'
2025-12-04T10:14:36.7078757Z Entering 'third_party/fbgemm/external/json'
2025-12-04T10:14:36.7104270Z Entering 'third_party/flash-attention'
2025-12-04T10:14:36.7156448Z Entering 'third_party/flash-attention/csrc/composable_kernel'
2025-12-04T10:14:36.7217884Z Entering 'third_party/flash-attention/csrc/cutlass'
2025-12-04T10:14:36.7291697Z Entering 'third_party/flatbuffers'
2025-12-04T10:14:36.7342564Z Entering 'third_party/fmt'
2025-12-04T10:14:36.7372320Z Entering 'third_party/gemmlowp/gemmlowp'
2025-12-04T10:14:36.7403519Z Entering 'third_party/gloo'
2025-12-04T10:14:36.7425119Z Entering 'third_party/googletest'
2025-12-04T10:14:36.7469549Z Entering 'third_party/ideep'
2025-12-04T10:14:36.7512006Z Entering 'third_party/ideep/mkl-dnn'
2025-12-04T10:14:36.7563611Z Entering 'third_party/ittapi'
2025-12-04T10:14:36.7597203Z Entering 'third_party/kineto'
2025-12-04T10:14:36.7622950Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2025-12-04T10:14:36.7680040Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2025-12-04T10:14:36.7706340Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2025-12-04T10:14:36.7754932Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2025-12-04T10:14:36.7782885Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2025-12-04T10:14:36.7839178Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2025-12-04T10:14:36.7872825Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2025-12-04T10:14:36.7921717Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2025-12-04T10:14:36.7949421Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2025-12-04T10:14:36.7993381Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2025-12-04T10:14:36.8044815Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp'
2025-12-04T10:14:36.8080473Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:36.8125732Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:36.8154655Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2025-12-04T10:14:36.8201257Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2025-12-04T10:14:36.8226861Z Entering 'third_party/kleidiai'
2025-12-04T10:14:36.8268079Z Entering 'third_party/mimalloc'
2025-12-04T10:14:36.8297757Z Entering 'third_party/nlohmann'
2025-12-04T10:14:36.8324377Z Entering 'third_party/onnx'
2025-12-04T10:14:36.8365709Z Entering 'third_party/onnx/third_party/pybind11'
2025-12-04T10:14:36.8392083Z Entering 'third_party/opentelemetry-cpp'
2025-12-04T10:14:36.8424351Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2025-12-04T10:14:36.8467449Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2025-12-04T10:14:36.8507985Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2025-12-04T10:14:36.8536074Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2025-12-04T10:14:36.8573493Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2025-12-04T10:14:36.8612841Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2025-12-04T10:14:36.8635480Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2025-12-04T10:14:36.8656696Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:36.8680005Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:36.8712807Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2025-12-04T10:14:36.8745208Z Entering 'third_party/pocketfft'
2025-12-04T10:14:36.8777259Z Entering 'third_party/protobuf'
2025-12-04T10:14:36.8813883Z Entering 'third_party/protobuf/third_party/benchmark'
2025-12-04T10:14:36.8842001Z Entering 'third_party/protobuf/third_party/googletest'
2025-12-04T10:14:36.8877224Z Entering 'third_party/psimd'
2025-12-04T10:14:36.8900927Z Entering 'third_party/pthreadpool'
2025-12-04T10:14:36.8931546Z Entering 'third_party/pybind11'
2025-12-04T10:14:36.8954795Z Entering 'third_party/python-peachpy'
2025-12-04T10:14:36.8977978Z Entering 'third_party/sleef'
2025-12-04T10:14:36.8998581Z Entering 'third_party/tensorpipe'
2025-12-04T10:14:36.9021733Z Entering 'third_party/tensorpipe/third_party/googletest'
2025-12-04T10:14:36.9044272Z Entering 'third_party/tensorpipe/third_party/libnop'
2025-12-04T10:14:36.9071701Z Entering 'third_party/tensorpipe/third_party/libuv'
2025-12-04T10:14:36.9091456Z Entering 'third_party/tensorpipe/third_party/pybind11'
2025-12-04T10:14:36.9123144Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2025-12-04T10:14:36.9198022Z [command]/usr/bin/git config --local --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:36.9247879Z [command]/usr/bin/git submodule foreach --recursive git config --local --show-origin --name-only --get-regexp remote.origin.url
2025-12-04T10:14:36.9477999Z Entering 'android/libs/fbjni'
2025-12-04T10:14:36.9487252Z file:/home/runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config remote.origin.url
2025-12-04T10:14:36.9496493Z Entering 'third_party/FP16'
2025-12-04T10:14:36.9506771Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config remote.origin.url
2025-12-04T10:14:36.9515954Z Entering 'third_party/FXdiv'
2025-12-04T10:14:36.9532548Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config remote.origin.url
2025-12-04T10:14:36.9551839Z Entering 'third_party/NNPACK'
2025-12-04T10:14:36.9571519Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config remote.origin.url
2025-12-04T10:14:36.9581595Z Entering 'third_party/NVTX'
2025-12-04T10:14:36.9591680Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NVTX/config remote.origin.url
2025-12-04T10:14:36.9601139Z Entering 'third_party/VulkanMemoryAllocator'
2025-12-04T10:14:36.9617372Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config remote.origin.url
2025-12-04T10:14:36.9630797Z Entering 'third_party/XNNPACK'
2025-12-04T10:14:36.9650705Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config remote.origin.url
2025-12-04T10:14:36.9665145Z Entering 'third_party/aiter'
2025-12-04T10:14:36.9674970Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/config remote.origin.url
2025-12-04T10:14:36.9685209Z Entering 'third_party/aiter/3rdparty/composable_kernel'
2025-12-04T10:14:36.9699977Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/modules/3rdparty/composable_kernel/config remote.origin.url
2025-12-04T10:14:36.9712974Z Entering 'third_party/benchmark'
2025-12-04T10:14:36.9732862Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config remote.origin.url
2025-12-04T10:14:36.9757541Z Entering 'third_party/composable_kernel'
2025-12-04T10:14:36.9769627Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/composable_kernel/config remote.origin.url
2025-12-04T10:14:36.9783078Z Entering 'third_party/cpp-httplib'
2025-12-04T10:14:36.9799288Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpp-httplib/config remote.origin.url
2025-12-04T10:14:36.9809165Z Entering 'third_party/cpuinfo'
2025-12-04T10:14:36.9838104Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config remote.origin.url
2025-12-04T10:14:36.9858348Z Entering 'third_party/cudnn_frontend'
2025-12-04T10:14:36.9874533Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config remote.origin.url
2025-12-04T10:14:36.9883452Z Entering 'third_party/cutlass'
2025-12-04T10:14:36.9900111Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config remote.origin.url
2025-12-04T10:14:36.9915125Z Entering 'third_party/fbgemm'
2025-12-04T10:14:36.9943520Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config remote.origin.url
2025-12-04T10:14:36.9952723Z Entering 'third_party/fbgemm/external/asmjit'
2025-12-04T10:14:36.9975463Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/asmjit/config remote.origin.url
2025-12-04T10:14:36.9994536Z Entering 'third_party/fbgemm/external/composable_kernel'
2025-12-04T10:14:37.0005359Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/composable_kernel/config remote.origin.url
2025-12-04T10:14:37.0032881Z Entering 'third_party/fbgemm/external/cpuinfo'
2025-12-04T10:14:37.0052809Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cpuinfo/config remote.origin.url
2025-12-04T10:14:37.0062665Z Entering 'third_party/fbgemm/external/cutlass'
2025-12-04T10:14:37.0085078Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cutlass/config remote.origin.url
2025-12-04T10:14:37.0096665Z Entering 'third_party/fbgemm/external/googletest'
2025-12-04T10:14:37.0116781Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/googletest/config remote.origin.url
2025-12-04T10:14:37.0125973Z Entering 'third_party/fbgemm/external/hipify_torch'
2025-12-04T10:14:37.0147234Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/hipify_torch/config remote.origin.url
2025-12-04T10:14:37.0155666Z Entering 'third_party/fbgemm/external/json'
2025-12-04T10:14:37.0185360Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/json/config remote.origin.url
2025-12-04T10:14:37.0196588Z Entering 'third_party/flash-attention'
2025-12-04T10:14:37.0212515Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/config remote.origin.url
2025-12-04T10:14:37.0233917Z Entering 'third_party/flash-attention/csrc/composable_kernel'
2025-12-04T10:14:37.0261367Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/composable_kernel/config remote.origin.url
2025-12-04T10:14:37.0274340Z Entering 'third_party/flash-attention/csrc/cutlass'
2025-12-04T10:14:37.0299284Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/cutlass/config remote.origin.url
2025-12-04T10:14:37.0341866Z Entering 'third_party/flatbuffers'
2025-12-04T10:14:37.0358383Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config remote.origin.url
2025-12-04T10:14:37.0382061Z Entering 'third_party/fmt'
2025-12-04T10:14:37.0393355Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config remote.origin.url
2025-12-04T10:14:37.0412697Z Entering 'third_party/gemmlowp/gemmlowp'
2025-12-04T10:14:37.0437374Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config remote.origin.url
2025-12-04T10:14:37.0457827Z Entering 'third_party/gloo'
2025-12-04T10:14:37.0468538Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config remote.origin.url
2025-12-04T10:14:37.0477803Z Entering 'third_party/googletest'
2025-12-04T10:14:37.0496759Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config remote.origin.url
2025-12-04T10:14:37.0506492Z Entering 'third_party/ideep'
2025-12-04T10:14:37.0527049Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config remote.origin.url
2025-12-04T10:14:37.0536823Z Entering 'third_party/ideep/mkl-dnn'
2025-12-04T10:14:37.0562074Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config remote.origin.url
2025-12-04T10:14:37.0575670Z Entering 'third_party/ittapi'
2025-12-04T10:14:37.0586340Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config remote.origin.url
2025-12-04T10:14:37.0594523Z Entering 'third_party/kineto'
2025-12-04T10:14:37.0604523Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config remote.origin.url
2025-12-04T10:14:37.0623565Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2025-12-04T10:14:37.0648670Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config remote.origin.url
2025-12-04T10:14:37.0669415Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2025-12-04T10:14:37.0681092Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config remote.origin.url
2025-12-04T10:14:37.0690335Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2025-12-04T10:14:37.0709775Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config remote.origin.url
2025-12-04T10:14:37.0718954Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2025-12-04T10:14:37.0731391Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config remote.origin.url
2025-12-04T10:14:37.0738440Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2025-12-04T10:14:37.0758325Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config remote.origin.url
2025-12-04T10:14:37.0767151Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2025-12-04T10:14:37.0781477Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config remote.origin.url
2025-12-04T10:14:37.0803545Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2025-12-04T10:14:37.0829768Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config remote.origin.url
2025-12-04T10:14:37.0849834Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2025-12-04T10:14:37.0860370Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config remote.origin.url
2025-12-04T10:14:37.0869766Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2025-12-04T10:14:37.0879940Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config remote.origin.url
2025-12-04T10:14:37.0896784Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2025-12-04T10:14:37.0913338Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config remote.origin.url
2025-12-04T10:14:37.0922210Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp'
2025-12-04T10:14:37.0932330Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/config remote.origin.url
2025-12-04T10:14:37.0941228Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:37.0969141Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/civetweb/config remote.origin.url
2025-12-04T10:14:37.0992029Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:37.1019105Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/googletest/config remote.origin.url
2025-12-04T10:14:37.1045419Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2025-12-04T10:14:37.1059317Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config remote.origin.url
2025-12-04T10:14:37.1069264Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2025-12-04T10:14:37.1093991Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config remote.origin.url
2025-12-04T10:14:37.1105493Z Entering 'third_party/kleidiai'
2025-12-04T10:14:37.1117477Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kleidiai/config remote.origin.url
2025-12-04T10:14:37.1128535Z Entering 'third_party/mimalloc'
2025-12-04T10:14:37.1148019Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/mimalloc/config remote.origin.url
2025-12-04T10:14:37.1159817Z Entering 'third_party/nlohmann'
2025-12-04T10:14:37.1169669Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config remote.origin.url
2025-12-04T10:14:37.1179736Z Entering 'third_party/onnx'
2025-12-04T10:14:37.1195184Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config remote.origin.url
2025-12-04T10:14:37.1213791Z Entering 'third_party/onnx/third_party/pybind11'
2025-12-04T10:14:37.1230102Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url
2025-12-04T10:14:37.1243054Z Entering 'third_party/opentelemetry-cpp'
2025-12-04T10:14:37.1255585Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/config remote.origin.url
2025-12-04T10:14:37.1276102Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2025-12-04T10:14:37.1286934Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/benchmark/config remote.origin.url
2025-12-04T10:14:37.1295575Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2025-12-04T10:14:37.1312677Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/googletest/config remote.origin.url
2025-12-04T10:14:37.1321770Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2025-12-04T10:14:37.1339545Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/ms-gsl/config remote.origin.url
2025-12-04T10:14:37.1348584Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2025-12-04T10:14:37.1371604Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/nlohmann-json/config remote.origin.url
2025-12-04T10:14:37.1383693Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2025-12-04T10:14:37.1399361Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentelemetry-proto/config remote.origin.url
2025-12-04T10:14:37.1419161Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2025-12-04T10:14:37.1439174Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentracing-cpp/config remote.origin.url
2025-12-04T10:14:37.1448578Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2025-12-04T10:14:37.1457838Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/config remote.origin.url
2025-12-04T10:14:37.1466176Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:37.1475694Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/civetweb/config remote.origin.url
2025-12-04T10:14:37.1497306Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:37.1507546Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/googletest/config remote.origin.url
2025-12-04T10:14:37.1518097Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2025-12-04T10:14:37.1527865Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/tools/vcpkg/config remote.origin.url
2025-12-04T10:14:37.1542725Z Entering 'third_party/pocketfft'
2025-12-04T10:14:37.1552944Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config remote.origin.url
2025-12-04T10:14:37.1570445Z Entering 'third_party/protobuf'
2025-12-04T10:14:37.1581423Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config remote.origin.url
2025-12-04T10:14:37.1603649Z Entering 'third_party/protobuf/third_party/benchmark'
2025-12-04T10:14:37.1614499Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config remote.origin.url
2025-12-04T10:14:37.1634068Z Entering 'third_party/protobuf/third_party/googletest'
2025-12-04T10:14:37.1646546Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config remote.origin.url
2025-12-04T10:14:37.1666857Z Entering 'third_party/psimd'
2025-12-04T10:14:37.1677155Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config remote.origin.url
2025-12-04T10:14:37.1686433Z Entering 'third_party/pthreadpool'
2025-12-04T10:14:37.1702841Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config remote.origin.url
2025-12-04T10:14:37.1711985Z Entering 'third_party/pybind11'
2025-12-04T10:14:37.1722248Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config remote.origin.url
2025-12-04T10:14:37.1742976Z Entering 'third_party/python-peachpy'
2025-12-04T10:14:37.1753422Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config remote.origin.url
2025-12-04T10:14:37.1762440Z Entering 'third_party/sleef'
2025-12-04T10:14:37.1772988Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config remote.origin.url
2025-12-04T10:14:37.1782198Z Entering 'third_party/tensorpipe'
2025-12-04T10:14:37.1792268Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config remote.origin.url
2025-12-04T10:14:37.1803085Z Entering 'third_party/tensorpipe/third_party/googletest'
2025-12-04T10:14:37.1815279Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config remote.origin.url
2025-12-04T10:14:37.1824825Z Entering 'third_party/tensorpipe/third_party/libnop'
2025-12-04T10:14:37.1834316Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config remote.origin.url
2025-12-04T10:14:37.1842941Z Entering 'third_party/tensorpipe/third_party/libuv'
2025-12-04T10:14:37.1862261Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config remote.origin.url
2025-12-04T10:14:37.1872313Z Entering 'third_party/tensorpipe/third_party/pybind11'
2025-12-04T10:14:37.1898888Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config remote.origin.url
2025-12-04T10:14:37.1908509Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2025-12-04T10:14:37.1928849Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url
2025-12-04T10:14:37.1955695Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.1987177Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2056877Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2059010Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2060862Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NVTX/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2062736Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2084294Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2112408Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2128400Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/modules/3rdparty/composable_kernel/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2144424Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2171310Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/composable_kernel/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2198901Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpp-httplib/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2227419Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2244558Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2270851Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2287286Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2302918Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/asmjit/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2320188Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/composable_kernel/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2334571Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cpuinfo/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2361172Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cutlass/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2386987Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2404849Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/hipify_torch/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2431624Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/json/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2456728Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2473699Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/composable_kernel/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2492901Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/cutlass/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2509380Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2536483Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2562900Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2578903Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2604620Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2620979Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2639569Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2659904Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2682532Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2711449Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2739616Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2771890Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2799538Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2830889Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2861791Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2891830Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2908971Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2936222Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2964476Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.2986071Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.3009793Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/civetweb/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.3036760Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.3052931Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.3069540Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.3085420Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kleidiai/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.3110185Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/mimalloc/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.3141405Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.3169165Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.3186388Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.3209179Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.3237272Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/benchmark/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T10:14:37.3263866Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3279908Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/ms-gsl/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3310927Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/nlohmann-json/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3339728Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentelemetry-proto/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3356462Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentracing-cpp/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3373316Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3399468Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/civetweb/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3416187Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/googletest/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3444346Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/tools/vcpkg/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3461354Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3479173Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3496788Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3524633Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3540797Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3568811Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3596388Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3625559Z [command]/usr/bin/git config --file 
/home/runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3642569Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3671924Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3688702Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3716844Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3733501Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3757146Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3773412Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:37.3805403Z [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic *** 2025-12-04T10:14:37.3835129Z ##[endgroup] 2025-12-04T10:14:37.3835637Z ##[group]Fetching the repository 2025-12-04T10:14:37.3839450Z [command]/usr/bin/git -c protocol.version=2 fetch --prune --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/* 2025-12-04T10:14:41.1603446Z From https://github.com/pytorch/pytorch 2025-12-04T10:14:41.1604122Z * [new branch] 2.6.0.dev20241004+ -> origin/2.6.0.dev20241004+ 2025-12-04T10:14:41.1604718Z * [new branch] 2.9.1 -> origin/2.9.1 2025-12-04T10:14:41.1605394Z * [new branch] AaronWang04_addmmfusion_perftest -> origin/AaronWang04_addmmfusion_perftest 2025-12-04T10:14:41.1606148Z * [new branch] Flamefire-patch-1 -> origin/Flamefire-patch-1 2025-12-04T10:14:41.1606958Z * [new branch] HDCharles-2.6.0-release-notes -> origin/HDCharles-2.6.0-release-notes 2025-12-04T10:14:41.1607629Z * [new branch] HOPrintFunc -> origin/HOPrintFunc 2025-12-04T10:14:41.1608216Z * [new branch] IvanKobzarev/stack/1 -> origin/IvanKobzarev/stack/1 2025-12-04T10:14:41.1608824Z * [new branch] NicoshevSVE128 -> origin/NicoshevSVE128 2025-12-04T10:14:41.1609449Z * [new branch] PR-AOTInductorNoneBug -> origin/PR-AOTInductorNoneBug 2025-12-04T10:14:41.1610127Z * [new branch] PR-AOTInductorNoneBugFix -> origin/PR-AOTInductorNoneBugFix 2025-12-04T10:14:41.1610877Z * [new branch] PR-FixConfigsIssue -> origin/PR-FixConfigsIssue 2025-12-04T10:14:41.1611518Z * [new branch] PR-NoneBugFix-viable -> origin/PR-NoneBugFix-viable 2025-12-04T10:14:41.1612119Z * [new branch] PR-ResetToZero -> origin/PR-ResetToZero 2025-12-04T10:14:41.1613684Z * [new branch] Update-Flash-Packaging -> origin/Update-Flash-Packaging 2025-12-04T10:14:41.1614295Z * [new branch] VLA_exp -> origin/VLA_exp 2025-12-04T10:14:41.1614854Z * [new branch] activation_bench -> origin/activation_bench 
2025-12-04T10:14:41.1615446Z * [new branch] addmm-heuristic -> origin/addmm-heuristic 2025-12-04T10:14:41.1616046Z * [new branch] adi/onednn_aarch64 -> origin/adi/onednn_aarch64 2025-12-04T10:14:41.1616609Z * [new branch] adi/test -> origin/adi/test 2025-12-04T10:14:41.1617165Z * [new branch] adi/test_bgemm -> origin/adi/test_bgemm 2025-12-04T10:14:41.1617719Z * [new branch] adi/test_m8g -> origin/adi/test_m8g 2025-12-04T10:14:41.1618283Z * [new branch] adi/test_onednn -> origin/adi/test_onednn 2025-12-04T10:14:41.1618875Z * [new branch] adi/test_onednn_v3.9 -> origin/adi/test_onednn_v3.9 2025-12-04T10:14:41.1619492Z * [new branch] adi/test_presve_change -> origin/adi/test_presve_change 2025-12-04T10:14:41.1620093Z * [new branch] adi/test_timm -> origin/adi/test_timm 2025-12-04T10:14:41.1620934Z * [new branch] adi/testpresve_change -> origin/adi/testpresve_change 2025-12-04T10:14:41.1621570Z * [new branch] aditew01/test/vec_bf16 -> origin/aditew01/test/vec_bf16 2025-12-04T10:14:41.1622217Z * [new branch] ah-globalfeedback-hook -> origin/ah-globalfeedback-hook 2025-12-04T10:14:41.1622881Z * [new branch] albanD-patch-1 -> origin/albanD-patch-1 2025-12-04T10:14:41.1623493Z * [new branch] also-surround-shimh -> origin/also-surround-shimh 2025-12-04T10:14:41.1624117Z * [new branch] angelayi/aot_compile -> origin/angelayi/aot_compile 2025-12-04T10:14:41.1624839Z * [new branch] angelayi/aoti_additional_files -> origin/angelayi/aoti_additional_files 2025-12-04T10:14:41.1625530Z * [new branch] angelayi/benchmark -> origin/angelayi/benchmark 2025-12-04T10:14:41.1626280Z * [new branch] angelayi/change_pytree_serialization -> origin/angelayi/change_pytree_serialization 2025-12-04T10:14:41.1627045Z * [new branch] angelayi/cpp_loader -> origin/angelayi/cpp_loader 2025-12-04T10:14:41.1627688Z * [new branch] angelayi/inductor_const -> origin/angelayi/inductor_const 2025-12-04T10:14:41.1628296Z * [new branch] angelayi/lstm -> origin/angelayi/lstm 2025-12-04T10:14:41.1628886Z * [new branch] angelayi/no_so_weight -> origin/angelayi/no_so_weight 2025-12-04T10:14:41.1629492Z * [new branch] angelayi/scan_layers -> origin/angelayi/scan_layers 2025-12-04T10:14:41.1630097Z * [new branch] angelayi/side_eff -> origin/angelayi/side_eff 2025-12-04T10:14:41.1630778Z * [new branch] angelayi/state_dict -> origin/angelayi/state_dict 2025-12-04T10:14:41.1631394Z * [new branch] angelayi/symint_input -> origin/angelayi/symint_input 2025-12-04T10:14:41.1632005Z * [new branch] angelayi/symm_mem -> origin/angelayi/symm_mem 2025-12-04T10:14:41.1632617Z * [new branch] angelayi/test_cpp -> origin/angelayi/test_cpp 2025-12-04T10:14:41.1633202Z * [new branch] angelayi/torch_size -> origin/angelayi/torch_size 2025-12-04T10:14:41.1633793Z * [new branch] annotate_assert -> origin/annotate_assert 2025-12-04T10:14:41.1634406Z * [new branch] annotate_fallback_kernel -> origin/annotate_fallback_kernel 2025-12-04T10:14:41.1635050Z * [new branch] annotation_deepcopy -> origin/annotation_deepcopy 2025-12-04T10:14:41.1635646Z * [new branch] annotation_dynamo -> origin/annotation_dynamo 2025-12-04T10:14:41.1636462Z * [new branch] aot_eager_stack_trace -> origin/aot_eager_stack_trace 2025-12-04T10:14:41.1637058Z * [new branch] aoti-cuda-alloc -> origin/aoti-cuda-alloc 2025-12-04T10:14:41.1637646Z * [new branch] aoti_const_device -> origin/aoti_const_device 2025-12-04T10:14:41.1638247Z * [new branch] aoti_fqn_name_interface -> origin/aoti_fqn_name_interface 2025-12-04T10:14:41.1638923Z * [new branch] aoti_package_weights_binary -> 
origin/aoti_package_weights_binary 2025-12-04T10:14:41.1639584Z * [new branch] aoti_target_windows -> origin/aoti_target_windows 2025-12-04T10:14:41.1640311Z * [new branch] arsh/feat/inductor_check_profiling -> origin/arsh/feat/inductor_check_profiling 2025-12-04T10:14:41.1641068Z * [new branch] async_tp -> origin/async_tp 2025-12-04T10:14:41.1641740Z * [new branch] atalman-inductor-perf-cu124 -> origin/atalman-inductor-perf-cu124 2025-12-04T10:14:41.1642532Z * [new branch] atalman-inductor-perf-cu124.1 -> origin/atalman-inductor-perf-cu124.1 2025-12-04T10:14:41.1643239Z * [new branch] atalman-patch-2 -> origin/atalman-patch-2 2025-12-04T10:14:41.1643830Z * [new branch] atalman-patch-3 -> origin/atalman-patch-3 2025-12-04T10:14:41.1644510Z * [new branch] atalman-patch-4 -> origin/atalman-patch-4 2025-12-04T10:14:41.1645100Z * [new branch] atalman-patch-5 -> origin/atalman-patch-5 2025-12-04T10:14:41.1645672Z * [new branch] atalman-patch-6 -> origin/atalman-patch-6 2025-12-04T10:14:41.1646231Z * [new branch] atalman-patch-7 -> origin/atalman-patch-7 2025-12-04T10:14:41.1646799Z * [new branch] atalman-patch-8 -> origin/atalman-patch-8 2025-12-04T10:14:41.1647415Z * [new branch] atalman_inductor_2.3.1 -> origin/atalman_inductor_2.3.1 2025-12-04T10:14:41.1648051Z * [new branch] atalman_inductor_2.4.0 -> origin/atalman_inductor_2.4.0 2025-12-04T10:14:41.1648675Z * [new branch] atalman_inductor_2.4.x -> origin/atalman_inductor_2.4.x 2025-12-04T10:14:41.1649369Z * [new branch] attention_benchmarking_clean -> origin/attention_benchmarking_clean 2025-12-04T10:14:41.1650071Z * [new branch] bahuang/dt_fix_scalar_add -> origin/bahuang/dt_fix_scalar_add 2025-12-04T10:14:41.1650787Z * [new branch] bahuang/fix_debug_mode -> origin/bahuang/fix_debug_mode 2025-12-04T10:14:41.1651401Z * [new branch] bahuang/fix_expand -> origin/bahuang/fix_expand 2025-12-04T10:14:41.1651988Z * [new branch] bahuang/test -> origin/bahuang/test 2025-12-04T10:14:41.1652533Z * [new branch] base/1.5 -> origin/base/1.5 2025-12-04T10:14:41.1653209Z * [new branch] batching_sdpa_efficient_attention -> origin/batching_sdpa_efficient_attention 2025-12-04T10:14:41.1653920Z * [new branch] bench_scaled_mm_ops -> origin/bench_scaled_mm_ops 2025-12-04T10:14:41.1654531Z * [new branch] benchmark-updates -> origin/benchmark-updates 2025-12-04T10:14:41.1655156Z * [new branch] benchmarking-script -> origin/benchmarking-script 2025-12-04T10:14:41.1655782Z * [new branch] bertmaher/pinbump26 -> origin/bertmaher/pinbump26 2025-12-04T10:14:41.1656389Z * [new branch] bertrand/cutlass -> origin/bertrand/cutlass 2025-12-04T10:14:41.1656987Z * [new branch] bf/bug-static-input -> origin/bf/bug-static-input 2025-12-04T10:14:41.1657575Z * [new branch] bf/cg-backend -> origin/bf/cg-backend 2025-12-04T10:14:41.1658135Z * [new branch] bf/cg-nccl-test -> origin/bf/cg-nccl-test 2025-12-04T10:14:41.1658817Z * [new branch] bf/cg-remove-check -> origin/bf/cg-remove-check 2025-12-04T10:14:41.1659448Z * [new branch] bf/clean-torchbench-hf -> origin/bf/clean-torchbench-hf 2025-12-04T10:14:41.1660073Z * [new branch] bf/combo-debug-log -> origin/bf/combo-debug-log 2025-12-04T10:14:41.1660730Z * [new branch] bf/cudagraph -> origin/bf/cudagraph 2025-12-04T10:14:41.1661487Z * [new branch] bf/cudagraph-disable-input-mutation -> origin/bf/cudagraph-disable-input-mutation 2025-12-04T10:14:41.1662844Z * [new branch] bf/cudagraph-enable-input-mutation-support-benchmark -> origin/bf/cudagraph-enable-input-mutation-support-benchmark 2025-12-04T10:14:41.1663860Z * [new branch] 
bf/cudagraph-partition -> origin/bf/cudagraph-partition 2025-12-04T10:14:41.1664526Z * [new branch] bf/donated-buffer-bench -> origin/bf/donated-buffer-bench 2025-12-04T10:14:41.1665176Z * [new branch] bf/dynamo-partition -> origin/bf/dynamo-partition 2025-12-04T10:14:41.1665753Z * [new branch] bf/lite -> origin/bf/lite 2025-12-04T10:14:41.1666317Z * [new branch] bf/pa-non-divisible -> origin/bf/pa-non-divisible 2025-12-04T10:14:41.1667056Z * [new branch] bf/partition-cache-free-symbols -> origin/bf/partition-cache-free-symbols 2025-12-04T10:14:41.1667955Z * [new branch] bf/partition-memory-plan -> origin/bf/partition-memory-plan 2025-12-04T10:14:41.1668630Z * [new branch] bf/partition-move-cpu -> origin/bf/partition-move-cpu 2025-12-04T10:14:41.1669329Z * [new branch] bf/partition-view-fallback -> origin/bf/partition-view-fallback 2025-12-04T10:14:41.1670031Z * [new branch] bf/remove-check-55b0c39d -> origin/bf/remove-check-55b0c39d 2025-12-04T10:14:41.1670724Z * [new branch] bf/timm-nov-26-2025 -> origin/bf/timm-nov-26-2025 2025-12-04T10:14:41.1671400Z * [new branch] bf/transformer-pin-4-57-3 -> origin/bf/transformer-pin-4-57-3 2025-12-04T10:14:41.1672132Z * [new branch] bisect_perf_hf_T5_3acc6eac492 -> origin/bisect_perf_hf_T5_3acc6eac492 2025-12-04T10:14:41.1672843Z * [new branch] bisect_perf_hf_T5_3fcf66f61fb -> origin/bisect_perf_hf_T5_3fcf66f61fb 2025-12-04T10:14:41.1673541Z * [new branch] bisect_perf_hf_T5_4009d154129 -> origin/bisect_perf_hf_T5_4009d154129 2025-12-04T10:14:41.1674235Z * [new branch] bisect_perf_hf_T5_40d0740e73d -> origin/bisect_perf_hf_T5_40d0740e73d 2025-12-04T10:14:41.1674921Z * [new branch] bisect_perf_hf_T5_5268754e -> origin/bisect_perf_hf_T5_5268754e 2025-12-04T10:14:41.1675603Z * [new branch] bisect_perf_hf_T5_7d89a8d385c -> origin/bisect_perf_hf_T5_7d89a8d385c 2025-12-04T10:14:41.1676313Z * [new branch] bisect_perf_hf_T5_b7a25c1ee7c -> origin/bisect_perf_hf_T5_b7a25c1ee7c 2025-12-04T10:14:41.1677009Z * [new branch] bisect_perf_hf_T5_c25b201583f -> origin/bisect_perf_hf_T5_c25b201583f 2025-12-04T10:14:41.1677702Z * [new branch] bisect_perf_hf_T5_c93e57efac0 -> origin/bisect_perf_hf_T5_c93e57efac0 2025-12-04T10:14:41.1678392Z * [new branch] bisect_perf_hf_T5_ca9813ea149 -> origin/bisect_perf_hf_T5_ca9813ea149 2025-12-04T10:14:41.1679073Z * [new branch] bisect_perf_hf_T5_d65f194a -> origin/bisect_perf_hf_T5_d65f194a 2025-12-04T10:14:41.1679733Z * [new branch] bisect_perf_hf_T5_da94ab0b -> origin/bisect_perf_hf_T5_da94ab0b 2025-12-04T10:14:41.1680419Z * [new branch] bisect_perf_hf_T5_da94ab0b_new -> origin/bisect_perf_hf_T5_da94ab0b_new 2025-12-04T10:14:41.1681185Z * [new branch] bisect_perf_hf_T5_db4e8a1d8a8 -> origin/bisect_perf_hf_T5_db4e8a1d8a8 2025-12-04T10:14:41.1681873Z * [new branch] bisect_perf_hf_T5_e0d97e936a2 -> origin/bisect_perf_hf_T5_e0d97e936a2 2025-12-04T10:14:41.1682654Z * [new branch] bisect_perf_hf_T5_f23621ec563 -> origin/bisect_perf_hf_T5_f23621ec563 2025-12-04T10:14:41.1683328Z * [new branch] brister/fx_device_type -> origin/brister/fx_device_type 2025-12-04T10:14:41.1684019Z * [new branch] brister/test_inductor_all_fx -> origin/brister/test_inductor_all_fx 2025-12-04T10:14:41.1684836Z * [new branch] brister/tiled_reduction_no_numel_check -> origin/brister/tiled_reduction_no_numel_check 2025-12-04T10:14:41.1685573Z * [new branch] bwd-backup -> origin/bwd-backup 2025-12-04T10:14:41.1686114Z * [new branch] c57382a49 -> origin/c57382a49 2025-12-04T10:14:41.1686642Z * [new branch] ca_0431d47eaa -> origin/ca_0431d47eaa 
2025-12-04T10:14:41.1687189Z * [new branch] ca_fix_0431d47eaa -> origin/ca_fix_0431d47eaa 2025-12-04T10:14:41.1687854Z * [new branch] camyllh/test_setup_hooks_push -> origin/camyllh/test_setup_hooks_push 2025-12-04T10:14:41.1688522Z * [new branch] cccclai-patch-1 -> origin/cccclai-patch-1 2025-12-04T10:14:41.1689289Z * [new branch] cherry-pick-159969-by-pytorch_bot_bot_ -> origin/cherry-pick-159969-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1690336Z * [new branch] cherry-pick-160586-by-pytorch_bot_bot_ -> origin/cherry-pick-160586-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1691323Z * [new branch] cherry-pick-162208-by-pytorch_bot_bot_ -> origin/cherry-pick-162208-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1692220Z * [new branch] cherry-pick-163169-by-pytorch_bot_bot_ -> origin/cherry-pick-163169-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1693116Z * [new branch] cherry-pick-165086-by-pytorch_bot_bot_ -> origin/cherry-pick-165086-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1694005Z * [new branch] cherry-pick-165514-by-pytorch_bot_bot_ -> origin/cherry-pick-165514-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1694899Z * [new branch] cherry-pick-165601-by-pytorch_bot_bot_ -> origin/cherry-pick-165601-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1695791Z * [new branch] cherry-pick-165667-by-pytorch_bot_bot_ -> origin/cherry-pick-165667-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1696696Z * [new branch] cherry-pick-165815-by-pytorch_bot_bot_ -> origin/cherry-pick-165815-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1697594Z * [new branch] cherry-pick-165922-by-pytorch_bot_bot_ -> origin/cherry-pick-165922-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1698484Z * [new branch] cherry-pick-166148-by-pytorch_bot_bot_ -> origin/cherry-pick-166148-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1699372Z * [new branch] cherry-pick-166181-by-pytorch_bot_bot_ -> origin/cherry-pick-166181-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1700298Z * [new branch] cherry-pick-166404-by-pytorch_bot_bot_ -> origin/cherry-pick-166404-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1701254Z * [new branch] cherry-pick-166427-by-pytorch_bot_bot_ -> origin/cherry-pick-166427-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1702138Z * [new branch] cherry-pick-166480-by-pytorch_bot_bot_ -> origin/cherry-pick-166480-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1703032Z * [new branch] cherry-pick-166570-by-pytorch_bot_bot_ -> origin/cherry-pick-166570-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1703921Z * [new branch] cherry-pick-166993-by-pytorch_bot_bot_ -> origin/cherry-pick-166993-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1704813Z * [new branch] cherry-pick-167111-by-pytorch_bot_bot_ -> origin/cherry-pick-167111-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1705695Z * [new branch] cherry-pick-167478-by-pytorch_bot_bot_ -> origin/cherry-pick-167478-by-pytorch_bot_bot_ 2025-12-04T10:14:41.1706472Z * [new branch] cherry_pick_166036_166040 -> origin/cherry_pick_166036_166040 2025-12-04T10:14:41.1707202Z * [new branch] cherry_pick_166457 -> origin/cherry_pick_166457 2025-12-04T10:14:41.1707792Z * [new branch] cherrypick_166338 -> origin/cherrypick_166338 2025-12-04T10:14:41.1708365Z * [new branch] cherrypick_166458 -> origin/cherrypick_166458 2025-12-04T10:14:41.1708945Z * [new branch] cherrypick_166586 -> origin/cherrypick_166586 2025-12-04T10:14:41.1709521Z * [new branch] cherrypick_166956 -> origin/cherrypick_166956 2025-12-04T10:14:41.1710066Z * [new branch] ci_attn -> origin/ci_attn 2025-12-04T10:14:41.1710709Z * [new branch] codex-testing -> origin/codex-testing 2025-12-04T10:14:41.1711569Z * [new branch] 
codex/add-check_memory_overlap-helper-functions -> origin/codex/add-check_memory_overlap-helper-functions 2025-12-04T10:14:41.1712559Z * [new branch] codex/fix-issue-121219-in-pytorch -> origin/codex/fix-issue-121219-in-pytorch 2025-12-04T10:14:41.1713588Z * [new branch] codex/investigate-segfaults-in-get_tensor_storage_id -> origin/codex/investigate-segfaults-in-get_tensor_storage_id 2025-12-04T10:14:41.1714779Z * [new branch] codex/refactor-lintrunner-config-to-use-uv-run -> origin/codex/refactor-lintrunner-config-to-use-uv-run 2025-12-04T10:14:41.1715746Z * [new branch] compatiblpy39util -> origin/compatiblpy39util 2025-12-04T10:14:41.1716337Z * [new branch] cond_hop_device -> origin/cond_hop_device 2025-12-04T10:14:41.1716901Z * [new branch] context_test -> origin/context_test 2025-12-04T10:14:41.1717672Z * [new branch] copilot/code-style-cleanup-python-pip -> origin/copilot/code-style-cleanup-python-pip 2025-12-04T10:14:41.1718475Z * [new branch] cpio/fix_new_ami_tests -> origin/cpio/fix_new_ami_tests 2025-12-04T10:14:41.1719201Z * [new branch] cpp-docs-dependency-upgrade -> origin/cpp-docs-dependency-upgrade 2025-12-04T10:14:41.1720029Z * [new branch] crpa/typo-in-inductor_comm_lowering -> origin/crpa/typo-in-inductor_comm_lowering 2025-12-04T10:14:41.1720836Z * [new branch] csl/always_produce_xml -> origin/csl/always_produce_xml 2025-12-04T10:14:41.1721497Z * [new branch] csl/build_test_more_procs -> origin/csl/build_test_more_procs 2025-12-04T10:14:41.1722179Z * [new branch] csl/build_test_more_procs2 -> origin/csl/build_test_more_procs2 2025-12-04T10:14:41.1722799Z * [new branch] csl/clean_up -> origin/csl/clean_up 2025-12-04T10:14:41.1723419Z * [new branch] csl/fix_retry_segfault_exit -> origin/csl/fix_retry_segfault_exit 2025-12-04T10:14:41.1724028Z * [new branch] csl/katex -> origin/csl/katex 2025-12-04T10:14:41.1724586Z * [new branch] csl/larger_runner -> origin/csl/larger_runner 2025-12-04T10:14:41.1725166Z * [new branch] csl/lint_testing -> origin/csl/lint_testing 2025-12-04T10:14:41.1725730Z * [new branch] csl/lint_thing -> origin/csl/lint_thing 2025-12-04T10:14:41.1726327Z * [new branch] csl/lintrunner_stuff -> origin/csl/lintrunner_stuff 2025-12-04T10:14:41.1726947Z * [new branch] csl/manually_gen_json -> origin/csl/manually_gen_json 2025-12-04T10:14:41.1727545Z * [new branch] csl/mps_sharding -> origin/csl/mps_sharding 2025-12-04T10:14:41.1728151Z * [new branch] csl/multistage_docker -> origin/csl/multistage_docker 2025-12-04T10:14:41.1728746Z * [new branch] csl/print_timing -> origin/csl/print_timing 2025-12-04T10:14:41.1729343Z * [new branch] csl/remove_experiment -> origin/csl/remove_experiment 2025-12-04T10:14:41.1730095Z * [new branch] csl/remove_maybe_unused_var -> origin/csl/remove_maybe_unused_var 2025-12-04T10:14:41.1730918Z * [new branch] csl/remove_repo_specific_autolabel -> origin/csl/remove_repo_specific_autolabel 2025-12-04T10:14:41.1731649Z * [new branch] csl/remove_run_parallel -> origin/csl/remove_run_parallel 2025-12-04T10:14:41.1732276Z * [new branch] csl/remove_unused_vars -> origin/csl/remove_unused_vars 2025-12-04T10:14:41.1732877Z * [new branch] csl/revert_open -> origin/csl/revert_open 2025-12-04T10:14:41.1733455Z * [new branch] csl/skip_build -> origin/csl/skip_build 2025-12-04T10:14:41.1734089Z * [new branch] csl/smaller_avx_amx_runenrs -> origin/csl/smaller_avx_amx_runenrs 2025-12-04T10:14:41.1734717Z * [new branch] csl/td_job_level -> origin/csl/td_job_level 2025-12-04T10:14:41.1735392Z * [new branch] csl/test_cuda_build_large_runner -> 
origin/csl/test_cuda_build_large_runner 2025-12-04T10:14:41.1736214Z * [new branch] csl/test_owners_autograd_dispatch_nn -> origin/csl/test_owners_autograd_dispatch_nn 2025-12-04T10:14:41.1737028Z * [new branch] csl/test_owners_higher_confidence -> origin/csl/test_owners_higher_confidence 2025-12-04T10:14:41.1737754Z * [new branch] csl/upload_json_running -> origin/csl/upload_json_running 2025-12-04T10:14:41.1738452Z * [new branch] csl/win_sccache -> origin/csl/win_sccache 2025-12-04T10:14:41.1739003Z * [new branch] csl/xml_stuff -> origin/csl/xml_stuff 2025-12-04T10:14:41.1739564Z * [new branch] cublasrelax2 -> origin/cublasrelax2 2025-12-04T10:14:41.1740119Z * [new branch] cuda_mempool -> origin/cuda_mempool 2025-12-04T10:14:41.1740775Z * [new branch] custom_lowering_dict -> origin/custom_lowering_dict 2025-12-04T10:14:41.1741422Z * [new branch] d4l3k/debug_plane_frtrace -> origin/d4l3k/debug_plane_frtrace 2025-12-04T10:14:41.1742036Z * [new branch] daxia6/2.8o3 -> origin/daxia6/2.8o3 2025-12-04T10:14:41.1742580Z * [new branch] debug-guard -> origin/debug-guard 2025-12-04T10:14:41.1743162Z * [new branch] delete-quant-docs -> origin/delete-quant-docs 2025-12-04T10:14:41.1744240Z * [new branch] dependabot/pip/dot-ci/docker/ci_commit_pins/main/transformers-4.57.0 -> origin/dependabot/pip/dot-ci/docker/ci_commit_pins/main/transformers-4.57.0 2025-12-04T10:14:41.1745714Z * [new branch] dependabot/pip/dot-ci/docker/ci_commit_pins/main/transformers-4.57.1 -> origin/dependabot/pip/dot-ci/docker/ci_commit_pins/main/transformers-4.57.1 2025-12-04T10:14:41.1746818Z * [new branch] desertfire/test_cpp_wrapper -> origin/desertfire/test_cpp_wrapper 2025-12-04T10:14:41.1747604Z * [new branch] desertfire/triton-cpu-for-aarch64 -> origin/desertfire/triton-cpu-for-aarch64 2025-12-04T10:14:41.1748372Z * [new branch] dev/dhruva/flex_attn_opt -> origin/dev/dhruva/flex_attn_opt 2025-12-04T10:14:41.1749031Z * [new branch] dev/joona/MPSNDArrayAdd -> origin/dev/joona/MPSNDArrayAdd 2025-12-04T10:14:41.1749659Z * [new branch] dev/joona/Unranked -> origin/dev/joona/Unranked 2025-12-04T10:14:41.1750244Z * [new branch] dev/joona/cat -> origin/dev/joona/cat 2025-12-04T10:14:41.1750915Z * [new branch] dev/joona/embeddingbag -> origin/dev/joona/embeddingbag 2025-12-04T10:14:41.1751580Z * [new branch] dev/joona/fix_sdpa_memtest -> origin/dev/joona/fix_sdpa_memtest 2025-12-04T10:14:41.1752286Z * [new branch] dev/joona/getTensorsString -> origin/dev/joona/getTensorsString 2025-12-04T10:14:41.1753013Z * [new branch] dev/joona/mps_linear_macos14 -> origin/dev/joona/mps_linear_macos14 2025-12-04T10:14:41.1753801Z * [new branch] dev/joona/scalar_clamp -> origin/dev/joona/scalar_clamp 2025-12-04T10:14:41.1754395Z * [new branch] dev/joona/sdpa -> origin/dev/joona/sdpa 2025-12-04T10:14:41.1754981Z * [new branch] dev/joona/sdpa_api -> origin/dev/joona/sdpa_api 2025-12-04T10:14:41.1755578Z * [new branch] dev/joona/type_inf -> origin/dev/joona/type_inf 2025-12-04T10:14:41.1756226Z * [new branch] dev/joona/ulpAssertClose -> origin/dev/joona/ulpAssertClose 2025-12-04T10:14:41.1756860Z * [new branch] dev/joona/upsize3d -> origin/dev/joona/upsize3d 2025-12-04T10:14:41.1757429Z * [new branch] disp_counter -> origin/disp_counter 2025-12-04T10:14:41.1758017Z * [new branch] divyanshk-patch-1 -> origin/divyanshk-patch-1 2025-12-04T10:14:41.1758581Z * [new branch] docs -> origin/docs 2025-12-04T10:14:41.1759112Z * [new branch] documentation -> origin/documentation 2025-12-04T10:14:41.1759707Z * [new branch] eager_model_benchmarks -> 
origin/eager_model_benchmarks 2025-12-04T10:14:41.1760407Z * [new branch] embg/test_inductor_ci_control -> origin/embg/test_inductor_ci_control 2025-12-04T10:14:41.1761193Z * [new branch] embg/triton_l2_prefetch_128B -> origin/embg/triton_l2_prefetch_128B 2025-12-04T10:14:41.1761978Z * [new branch] embg/triton_l2_prefetch_256B -> origin/embg/triton_l2_prefetch_256B 2025-12-04T10:14:41.1762614Z * [new branch] eqy-patch-1 -> origin/eqy-patch-1 2025-12-04T10:14:41.1763165Z * [new branch] eqy-patch-2 -> origin/eqy-patch-2 2025-12-04T10:14:41.1763706Z * [new branch] eqy-patch-3 -> origin/eqy-patch-3 2025-12-04T10:14:41.1764245Z * [new branch] eqy-patch-4 -> origin/eqy-patch-4 2025-12-04T10:14:41.1764777Z * [new branch] eqy-patch-5 -> origin/eqy-patch-5 2025-12-04T10:14:41.1765316Z * [new branch] eqy-patch-6 -> origin/eqy-patch-6 2025-12-04T10:14:41.1765898Z * [new branch] exclamaforte/amd-ma -> origin/exclamaforte/amd-ma 2025-12-04T10:14:41.1766655Z * [new branch] exclamaforte/combo-kernels-perf-run -> origin/exclamaforte/combo-kernels-perf-run 2025-12-04T10:14:41.1767490Z * [new branch] exclamaforte/do_bench_refactor -> origin/exclamaforte/do_bench_refactor 2025-12-04T10:14:41.1768307Z * [new branch] exclamaforte/enable-mem-dep-fusion -> origin/exclamaforte/enable-mem-dep-fusion 2025-12-04T10:14:41.1769230Z * [new branch] exclamaforte/fix-exhaustive-autotuning -> origin/exclamaforte/fix-exhaustive-autotuning 2025-12-04T10:14:41.1770180Z * [new branch] exclamaforte/fix-trace-parsing-fx-svg -> origin/exclamaforte/fix-trace-parsing-fx-svg 2025-12-04T10:14:41.1771232Z * [new branch] exclamaforte/force-pointwise-cat-perf-run -> origin/exclamaforte/force-pointwise-cat-perf-run 2025-12-04T10:14:41.1772083Z * [new branch] exclamaforte/fusion-data -> origin/exclamaforte/fusion-data 2025-12-04T10:14:41.1772836Z * [new branch] exclamaforte/gemm-benchmark-run -> origin/exclamaforte/gemm-benchmark-run 2025-12-04T10:14:41.1773634Z * [new branch] exclamaforte/gemm-export-model -> origin/exclamaforte/gemm-export-model 2025-12-04T10:14:41.1774350Z * [new branch] exclamaforte/gemm-model -> origin/exclamaforte/gemm-model 2025-12-04T10:14:41.1775217Z * [new branch] exclamaforte/gemm-model-all-data-collection -> origin/exclamaforte/gemm-model-all-data-collection 2025-12-04T10:14:41.1776081Z * [new branch] exclamaforte/gemm-to-amd -> origin/exclamaforte/gemm-to-amd 2025-12-04T10:14:41.1776783Z * [new branch] exclamaforte/just-gemm-model -> origin/exclamaforte/just-gemm-model 2025-12-04T10:14:41.1777768Z * [new branch] exclamaforte/just-gemm-model-no-refactor -> origin/exclamaforte/just-gemm-model-no-refactor 2025-12-04T10:14:41.1778657Z * [new branch] exclamaforte/profile-diff-algo -> origin/exclamaforte/profile-diff-algo 2025-12-04T10:14:41.1779504Z * [new branch] exclamaforte/profiler-visualization -> origin/exclamaforte/profiler-visualization 2025-12-04T10:14:41.1780385Z * [new branch] exclamaforte/test_cpp_wrapper_mode -> origin/exclamaforte/test_cpp_wrapper_mode 2025-12-04T10:14:41.1781311Z * [new branch] exclamaforte/update-autotune-configs -> origin/exclamaforte/update-autotune-configs 2025-12-04T10:14:41.1782257Z * [new branch] exclamaforte/update-autotune-configs-2 -> origin/exclamaforte/update-autotune-configs-2 2025-12-04T10:14:41.1782994Z * [new branch] exec -> origin/exec 2025-12-04T10:14:41.1783565Z * [new branch] experimental-mosaic -> origin/experimental-mosaic 2025-12-04T10:14:41.1784184Z * [new branch] export-D61047529 -> origin/export-D61047529 2025-12-04T10:14:41.1784891Z * [new branch] 
export-D71412006 -> origin/export-D71412006 2025-12-04T10:14:41.1785457Z * [new branch] export-D73042989 -> origin/export-D73042989 2025-12-04T10:14:41.1786107Z * [new branch] export-D78957093 -> origin/export-D78957093 2025-12-04T10:14:41.1786671Z * [new branch] export-D78996107 -> origin/export-D78996107 2025-12-04T10:14:41.1787220Z * [new branch] export-D80823877 -> origin/export-D80823877 2025-12-04T10:14:41.1787771Z * [new branch] export-D80958642 -> origin/export-D80958642 2025-12-04T10:14:41.1788324Z * [new branch] export-D81054193 -> origin/export-D81054193 2025-12-04T10:14:41.1788880Z * [new branch] export-D81204584 -> origin/export-D81204584 2025-12-04T10:14:41.1789443Z * [new branch] export-D81429090 -> origin/export-D81429090 2025-12-04T10:14:41.1790016Z * [new branch] export-D82250826 -> origin/export-D82250826 2025-12-04T10:14:41.1790566Z * [new branch] export-D82253817 -> origin/export-D82253817 2025-12-04T10:14:41.1791183Z * [new branch] export-D83541846 -> origin/export-D83541846 2025-12-04T10:14:41.1791740Z * [new branch] export-D83627170 -> origin/export-D83627170 2025-12-04T10:14:41.1792296Z * [new branch] export-D83766701 -> origin/export-D83766701 2025-12-04T10:14:41.1792853Z * [new branch] export-D83768878 -> origin/export-D83768878 2025-12-04T10:14:41.1793411Z * [new branch] export-D83769447 -> origin/export-D83769447 2025-12-04T10:14:41.1793962Z * [new branch] export-D84089824 -> origin/export-D84089824 2025-12-04T10:14:41.1794529Z * [new branch] export-D84213020 -> origin/export-D84213020 2025-12-04T10:14:41.1795090Z * [new branch] export-D84373821 -> origin/export-D84373821 2025-12-04T10:14:41.1795643Z * [new branch] export-D84612194 -> origin/export-D84612194 2025-12-04T10:14:41.1796204Z * [new branch] export-D84890985 -> origin/export-D84890985 2025-12-04T10:14:41.1796757Z * [new branch] export-D85122326 -> origin/export-D85122326 2025-12-04T10:14:41.1797321Z * [new branch] export-D86256198 -> origin/export-D86256198 2025-12-04T10:14:41.1797879Z * [new branch] export-D86460608 -> origin/export-D86460608 2025-12-04T10:14:41.1798431Z * [new branch] export-D86474796 -> origin/export-D86474796 2025-12-04T10:14:41.1798987Z * [new branch] export-D86712396 -> origin/export-D86712396 2025-12-04T10:14:41.1799650Z * [new branch] export-D87022129 -> origin/export-D87022129 2025-12-04T10:14:41.1800202Z * [new branch] export-D87838959 -> origin/export-D87838959 2025-12-04T10:14:41.1800825Z * [new branch] export-D88319437 -> origin/export-D88319437 2025-12-04T10:14:41.1801549Z * [new branch] exported-model-train-idempotent -> origin/exported-model-train-idempotent 2025-12-04T10:14:41.1802304Z * [new branch] ezyang-titan-october -> origin/ezyang-titan-october 2025-12-04T10:14:41.1802949Z * [new branch] ezyang-titan-october2 -> origin/ezyang-titan-october2 2025-12-04T10:14:41.1803549Z * [new branch] ezyang-war -> origin/ezyang-war 2025-12-04T10:14:41.1804193Z * [new branch] ezyang/wip-aot-descriptors -> origin/ezyang/wip-aot-descriptors 2025-12-04T10:14:41.1804831Z * [new branch] fa_u8_brgemm -> origin/fa_u8_brgemm 2025-12-04T10:14:41.1805447Z * [new branch] fadeputr/sequence_fbgemm -> origin/fadeputr/sequence_fbgemm 2025-12-04T10:14:41.1806070Z * [new branch] fastmath_baseline -> origin/fastmath_baseline 2025-12-04T10:14:41.1806643Z * [new branch] fbcode/warm -> origin/fbcode/warm 2025-12-04T10:14:41.1807170Z * [new branch] fca -> origin/fca 2025-12-04T10:14:41.1807771Z * [new branch] fca2_ca5984c -> origin/fca2_ca5984c 2025-12-04T10:14:41.1808291Z * [new branch] fca5 -> 
origin/fca5 2025-12-04T10:14:41.1808863Z * [new branch] feature/justknobs-cpp -> origin/feature/justknobs-cpp 2025-12-04T10:14:41.1809509Z * [new branch] feature/numa-forkserver -> origin/feature/numa-forkserver 2025-12-04T10:14:41.1810134Z * [new branch] ffast_math_baseline -> origin/ffast_math_baseline 2025-12-04T10:14:41.1810767Z * [new branch] ffast_math_target -> origin/ffast_math_target 2025-12-04T10:14:41.1811360Z * [new branch] findhao/base_commit -> origin/findhao/base_commit 2025-12-04T10:14:41.1811958Z * [new branch] findhao/base_commit1 -> origin/findhao/base_commit1 2025-12-04T10:14:41.1812568Z * [new branch] findhao/multistream2 -> origin/findhao/multistream2 2025-12-04T10:14:41.1813183Z * [new branch] findhao/multistream5 -> origin/findhao/multistream5 2025-12-04T10:14:41.1813801Z * [new branch] findhao/multistream6 -> origin/findhao/multistream6 2025-12-04T10:14:41.1814426Z * [new branch] findhao/operatorbench3 -> origin/findhao/operatorbench3 2025-12-04T10:14:41.1815077Z * [new branch] findhao/operatorbench5 -> origin/findhao/operatorbench5 2025-12-04T10:14:41.1815717Z * [new branch] findhao/tritonparse -> origin/findhao/tritonparse 2025-12-04T10:14:41.1816411Z * [new branch] fix-ck-gemm-template-format -> origin/fix-ck-gemm-template-format 2025-12-04T10:14:41.1817103Z * [new branch] fix-config-ignore -> origin/fix-config-ignore 2025-12-04T10:14:41.1817686Z * [new branch] fix-dict-guard -> origin/fix-dict-guard 2025-12-04T10:14:41.1818253Z * [new branch] fix_addmm_issue -> origin/fix_addmm_issue 2025-12-04T10:14:41.1818891Z * [new branch] fix_amd_missing_cluster_dims -> origin/fix_amd_missing_cluster_dims 2025-12-04T10:14:41.1819542Z * [new branch] fix_bench_bwd_pass -> origin/fix_bench_bwd_pass 2025-12-04T10:14:41.1820155Z * [new branch] fix_mem_profiler_config -> origin/fix_mem_profiler_config 2025-12-04T10:14:41.1820812Z * [new branch] fix_nvrtc_discovery -> origin/fix_nvrtc_discovery 2025-12-04T10:14:41.1821384Z * [new branch] fix_op_runner -> origin/fix_op_runner 2025-12-04T10:14:41.1822020Z * [new branch] fix_ubn_159469 -> origin/fix_ubn_159469 2025-12-04T10:14:41.1822574Z * [new branch] fixes-triage -> origin/fixes-triage 2025-12-04T10:14:41.1823126Z * [new branch] fixflashinfer -> origin/fixflashinfer 2025-12-04T10:14:41.1823705Z * [new branch] flash_decoding_cpu -> origin/flash_decoding_cpu 2025-12-04T10:14:41.1824285Z * [new branch] flex-flash -> origin/flex-flash 2025-12-04T10:14:41.1824933Z * [new branch] flex_attention_functorch_grad -> origin/flex_attention_functorch_grad 2025-12-04T10:14:41.1825574Z * [new branch] flex_flash -> origin/flex_flash 2025-12-04T10:14:41.1826225Z * [new branch] fmassa/fix_memeff_sharding_rule -> origin/fmassa/fix_memeff_sharding_rule 2025-12-04T10:14:41.1827033Z * [new branch] fmassa/tests_comm_compute_scheduler -> origin/fmassa/tests_comm_compute_scheduler 2025-12-04T10:14:41.1827740Z * [new branch] forkserver_fix -> origin/forkserver_fix 2025-12-04T10:14:41.1828311Z * [new branch] fsdp2_trace_rules -> origin/fsdp2_trace_rules 2025-12-04T10:14:41.1828871Z * [new branch] fx_cpp -> origin/fx_cpp 2025-12-04T10:14:41.1829394Z * [new branch] fy/fix-win -> origin/fy/fix-win 2025-12-04T10:14:41.1830022Z * [new branch] galv-patch-1 -> origin/galv-patch-1 2025-12-04T10:14:41.1830866Z * [new branch] galv/cudagraphs-conditional-nodes-4 -> origin/galv/cudagraphs-conditional-nodes-4 2025-12-04T10:14:41.1831702Z * [new branch] georgehong/cmakelists-patch -> origin/georgehong/cmakelists-patch 2025-12-04T10:14:41.1832381Z * [new branch] 
gh/AlnisM/1/base -> origin/gh/AlnisM/1/base 2025-12-04T10:14:41.1833270Z * [new branch] gh/AlnisM/1/head -> origin/gh/AlnisM/1/head 2025-12-04T10:14:41.1833878Z * [new branch] gh/EikanWang/67/base -> origin/gh/EikanWang/67/base 2025-12-04T10:14:41.1834493Z * [new branch] gh/EikanWang/67/head -> origin/gh/EikanWang/67/head 2025-12-04T10:14:41.1835105Z * [new branch] gh/Gasoonjia/1/base -> origin/gh/Gasoonjia/1/base 2025-12-04T10:14:41.1835713Z * [new branch] gh/Gasoonjia/1/head -> origin/gh/Gasoonjia/1/head 2025-12-04T10:14:41.1836315Z * [new branch] gh/H-Huang/131/base -> origin/gh/H-Huang/131/base 2025-12-04T10:14:41.1836897Z * [new branch] gh/H-Huang/131/head -> origin/gh/H-Huang/131/head 2025-12-04T10:14:41.1837476Z * [new branch] gh/H-Huang/131/orig -> origin/gh/H-Huang/131/orig 2025-12-04T10:14:41.1838064Z * [new branch] gh/H-Huang/132/base -> origin/gh/H-Huang/132/base 2025-12-04T10:14:41.1838640Z * [new branch] gh/H-Huang/132/head -> origin/gh/H-Huang/132/head 2025-12-04T10:14:41.1839222Z * [new branch] gh/H-Huang/132/orig -> origin/gh/H-Huang/132/orig 2025-12-04T10:14:41.1839793Z * [new branch] gh/H-Huang/180/base -> origin/gh/H-Huang/180/base 2025-12-04T10:14:41.1840378Z * [new branch] gh/H-Huang/180/head -> origin/gh/H-Huang/180/head 2025-12-04T10:14:41.1841023Z * [new branch] gh/H-Huang/180/orig -> origin/gh/H-Huang/180/orig 2025-12-04T10:14:41.1841593Z * [new branch] gh/H-Huang/182/base -> origin/gh/H-Huang/182/base 2025-12-04T10:14:41.1842171Z * [new branch] gh/H-Huang/182/head -> origin/gh/H-Huang/182/head 2025-12-04T10:14:41.1842757Z * [new branch] gh/H-Huang/182/orig -> origin/gh/H-Huang/182/orig 2025-12-04T10:14:41.1843332Z * [new branch] gh/H-Huang/226/base -> origin/gh/H-Huang/226/base 2025-12-04T10:14:41.1843913Z * [new branch] gh/H-Huang/226/head -> origin/gh/H-Huang/226/head 2025-12-04T10:14:41.1844606Z * [new branch] gh/H-Huang/226/orig -> origin/gh/H-Huang/226/orig 2025-12-04T10:14:41.1845182Z * [new branch] gh/H-Huang/228/base -> origin/gh/H-Huang/228/base 2025-12-04T10:14:41.1845756Z * [new branch] gh/H-Huang/228/head -> origin/gh/H-Huang/228/head 2025-12-04T10:14:41.1846337Z * [new branch] gh/H-Huang/228/orig -> origin/gh/H-Huang/228/orig 2025-12-04T10:14:41.1846963Z * [new branch] gh/IvanKobzarev/150/base -> origin/gh/IvanKobzarev/150/base 2025-12-04T10:14:41.1847637Z * [new branch] gh/IvanKobzarev/150/head -> origin/gh/IvanKobzarev/150/head 2025-12-04T10:14:41.1848297Z * [new branch] gh/IvanKobzarev/150/orig -> origin/gh/IvanKobzarev/150/orig 2025-12-04T10:14:41.1848950Z * [new branch] gh/IvanKobzarev/157/base -> origin/gh/IvanKobzarev/157/base 2025-12-04T10:14:41.1849597Z * [new branch] gh/IvanKobzarev/157/head -> origin/gh/IvanKobzarev/157/head 2025-12-04T10:14:41.1850256Z * [new branch] gh/IvanKobzarev/157/orig -> origin/gh/IvanKobzarev/157/orig 2025-12-04T10:14:41.1850952Z * [new branch] gh/IvanKobzarev/159/base -> origin/gh/IvanKobzarev/159/base 2025-12-04T10:14:41.1851597Z * [new branch] gh/IvanKobzarev/159/head -> origin/gh/IvanKobzarev/159/head 2025-12-04T10:14:41.1852343Z * [new branch] gh/IvanKobzarev/159/orig -> origin/gh/IvanKobzarev/159/orig 2025-12-04T10:14:41.1852988Z * [new branch] gh/IvanKobzarev/162/base -> origin/gh/IvanKobzarev/162/base 2025-12-04T10:14:41.1853633Z * [new branch] gh/IvanKobzarev/162/head -> origin/gh/IvanKobzarev/162/head 2025-12-04T10:14:41.1854278Z * [new branch] gh/IvanKobzarev/162/orig -> origin/gh/IvanKobzarev/162/orig 2025-12-04T10:14:41.1854934Z * [new branch] gh/IvanKobzarev/163/base -> 
origin/gh/IvanKobzarev/163/base 2025-12-04T10:14:41.1855591Z * [new branch] gh/IvanKobzarev/163/head -> origin/gh/IvanKobzarev/163/head 2025-12-04T10:14:41.1856236Z * [new branch] gh/IvanKobzarev/163/orig -> origin/gh/IvanKobzarev/163/orig 2025-12-04T10:14:41.1856893Z * [new branch] gh/IvanKobzarev/166/base -> origin/gh/IvanKobzarev/166/base 2025-12-04T10:14:41.1857547Z * [new branch] gh/IvanKobzarev/166/head -> origin/gh/IvanKobzarev/166/head 2025-12-04T10:14:41.1858190Z * [new branch] gh/IvanKobzarev/166/orig -> origin/gh/IvanKobzarev/166/orig 2025-12-04T10:14:41.1858837Z * [new branch] gh/IvanKobzarev/167/base -> origin/gh/IvanKobzarev/167/base 2025-12-04T10:14:41.1859491Z * [new branch] gh/IvanKobzarev/167/head -> origin/gh/IvanKobzarev/167/head 2025-12-04T10:14:41.1860138Z * [new branch] gh/IvanKobzarev/167/orig -> origin/gh/IvanKobzarev/167/orig 2025-12-04T10:14:41.1860867Z * [new branch] gh/IvanKobzarev/168/base -> origin/gh/IvanKobzarev/168/base 2025-12-04T10:14:41.1861527Z * [new branch] gh/IvanKobzarev/168/head -> origin/gh/IvanKobzarev/168/head 2025-12-04T10:14:41.1862182Z * [new branch] gh/IvanKobzarev/168/orig -> origin/gh/IvanKobzarev/168/orig 2025-12-04T10:14:41.1862833Z * [new branch] gh/IvanKobzarev/169/base -> origin/gh/IvanKobzarev/169/base 2025-12-04T10:14:41.1863488Z * [new branch] gh/IvanKobzarev/169/head -> origin/gh/IvanKobzarev/169/head 2025-12-04T10:14:41.1864145Z * [new branch] gh/IvanKobzarev/169/orig -> origin/gh/IvanKobzarev/169/orig 2025-12-04T10:14:41.1864795Z * [new branch] gh/IvanKobzarev/170/base -> origin/gh/IvanKobzarev/170/base 2025-12-04T10:14:41.1865452Z * [new branch] gh/IvanKobzarev/170/head -> origin/gh/IvanKobzarev/170/head 2025-12-04T10:14:41.1866099Z * [new branch] gh/IvanKobzarev/170/orig -> origin/gh/IvanKobzarev/170/orig 2025-12-04T10:14:41.1866878Z * [new branch] gh/IvanKobzarev/171/base -> origin/gh/IvanKobzarev/171/base 2025-12-04T10:14:41.1867524Z * [new branch] gh/IvanKobzarev/171/head -> origin/gh/IvanKobzarev/171/head 2025-12-04T10:14:41.1868166Z * [new branch] gh/IvanKobzarev/171/orig -> origin/gh/IvanKobzarev/171/orig 2025-12-04T10:14:41.1868823Z * [new branch] gh/IvanKobzarev/172/base -> origin/gh/IvanKobzarev/172/base 2025-12-04T10:14:41.1869469Z * [new branch] gh/IvanKobzarev/172/head -> origin/gh/IvanKobzarev/172/head 2025-12-04T10:14:41.1870114Z * [new branch] gh/IvanKobzarev/172/orig -> origin/gh/IvanKobzarev/172/orig 2025-12-04T10:14:41.1870824Z * [new branch] gh/IvanKobzarev/173/base -> origin/gh/IvanKobzarev/173/base 2025-12-04T10:14:41.1871470Z * [new branch] gh/IvanKobzarev/173/head -> origin/gh/IvanKobzarev/173/head 2025-12-04T10:14:41.1872112Z * [new branch] gh/IvanKobzarev/173/orig -> origin/gh/IvanKobzarev/173/orig 2025-12-04T10:14:41.1872767Z * [new branch] gh/IvanKobzarev/174/base -> origin/gh/IvanKobzarev/174/base 2025-12-04T10:14:41.1873422Z * [new branch] gh/IvanKobzarev/174/head -> origin/gh/IvanKobzarev/174/head 2025-12-04T10:14:41.1874066Z * [new branch] gh/IvanKobzarev/174/orig -> origin/gh/IvanKobzarev/174/orig 2025-12-04T10:14:41.1874806Z * [new branch] gh/IvanKobzarev/175/base -> origin/gh/IvanKobzarev/175/base 2025-12-04T10:14:41.1875457Z * [new branch] gh/IvanKobzarev/175/head -> origin/gh/IvanKobzarev/175/head 2025-12-04T10:14:41.1876103Z * [new branch] gh/IvanKobzarev/175/orig -> origin/gh/IvanKobzarev/175/orig 2025-12-04T10:14:41.1876748Z * [new branch] gh/IvanKobzarev/176/base -> origin/gh/IvanKobzarev/176/base 2025-12-04T10:14:41.1877393Z * [new branch] gh/IvanKobzarev/176/head -> 
origin/gh/IvanKobzarev/176/head 2025-12-04T10:14:41.1878055Z * [new branch] gh/IvanKobzarev/176/orig -> origin/gh/IvanKobzarev/176/orig 2025-12-04T10:14:41.1878701Z * [new branch] gh/IvanKobzarev/177/base -> origin/gh/IvanKobzarev/177/base 2025-12-04T10:14:41.1879346Z * [new branch] gh/IvanKobzarev/177/head -> origin/gh/IvanKobzarev/177/head 2025-12-04T10:14:41.1880010Z * [new branch] gh/IvanKobzarev/177/orig -> origin/gh/IvanKobzarev/177/orig 2025-12-04T10:14:41.1880724Z * [new branch] gh/IvanKobzarev/178/base -> origin/gh/IvanKobzarev/178/base 2025-12-04T10:14:41.1881373Z * [new branch] gh/IvanKobzarev/178/head -> origin/gh/IvanKobzarev/178/head 2025-12-04T10:14:41.1882021Z * [new branch] gh/IvanKobzarev/178/orig -> origin/gh/IvanKobzarev/178/orig 2025-12-04T10:14:41.1882674Z * [new branch] gh/IvanKobzarev/179/base -> origin/gh/IvanKobzarev/179/base 2025-12-04T10:14:41.1883315Z * [new branch] gh/IvanKobzarev/179/head -> origin/gh/IvanKobzarev/179/head 2025-12-04T10:14:41.1883967Z * [new branch] gh/IvanKobzarev/179/orig -> origin/gh/IvanKobzarev/179/orig 2025-12-04T10:14:41.1884611Z * [new branch] gh/IvanKobzarev/180/base -> origin/gh/IvanKobzarev/180/base 2025-12-04T10:14:41.1885261Z * [new branch] gh/IvanKobzarev/180/head -> origin/gh/IvanKobzarev/180/head 2025-12-04T10:14:41.1885912Z * [new branch] gh/IvanKobzarev/180/orig -> origin/gh/IvanKobzarev/180/orig 2025-12-04T10:14:41.1886558Z * [new branch] gh/IvanKobzarev/181/base -> origin/gh/IvanKobzarev/181/base 2025-12-04T10:14:41.1887209Z * [new branch] gh/IvanKobzarev/181/head -> origin/gh/IvanKobzarev/181/head 2025-12-04T10:14:41.1887859Z * [new branch] gh/IvanKobzarev/181/orig -> origin/gh/IvanKobzarev/181/orig 2025-12-04T10:14:41.1888508Z * [new branch] gh/IvanKobzarev/182/base -> origin/gh/IvanKobzarev/182/base 2025-12-04T10:14:41.1889248Z * [new branch] gh/IvanKobzarev/182/head -> origin/gh/IvanKobzarev/182/head 2025-12-04T10:14:41.1889897Z * [new branch] gh/IvanKobzarev/182/orig -> origin/gh/IvanKobzarev/182/orig 2025-12-04T10:14:41.1890540Z * [new branch] gh/IvanKobzarev/183/base -> origin/gh/IvanKobzarev/183/base 2025-12-04T10:14:41.1891259Z * [new branch] gh/IvanKobzarev/183/head -> origin/gh/IvanKobzarev/183/head 2025-12-04T10:14:41.1891912Z * [new branch] gh/IvanKobzarev/183/orig -> origin/gh/IvanKobzarev/183/orig 2025-12-04T10:14:41.1892568Z * [new branch] gh/IvanKobzarev/184/base -> origin/gh/IvanKobzarev/184/base 2025-12-04T10:14:41.1893219Z * [new branch] gh/IvanKobzarev/184/head -> origin/gh/IvanKobzarev/184/head 2025-12-04T10:14:41.1893868Z * [new branch] gh/IvanKobzarev/184/orig -> origin/gh/IvanKobzarev/184/orig 2025-12-04T10:14:41.1894535Z * [new branch] gh/NikhilAPatel/1/base -> origin/gh/NikhilAPatel/1/base 2025-12-04T10:14:41.1895188Z * [new branch] gh/NikhilAPatel/1/head -> origin/gh/NikhilAPatel/1/head 2025-12-04T10:14:41.1895827Z * [new branch] gh/NikhilAPatel/2/base -> origin/gh/NikhilAPatel/2/base 2025-12-04T10:14:41.1896467Z * [new branch] gh/NikhilAPatel/2/head -> origin/gh/NikhilAPatel/2/head 2025-12-04T10:14:41.1897190Z * [new branch] gh/NikhilAPatel/4/base -> origin/gh/NikhilAPatel/4/base 2025-12-04T10:14:41.1897827Z * [new branch] gh/NikhilAPatel/4/head -> origin/gh/NikhilAPatel/4/head 2025-12-04T10:14:41.1898457Z * [new branch] gh/NikhilAPatel/5/base -> origin/gh/NikhilAPatel/5/base 2025-12-04T10:14:41.1899093Z * [new branch] gh/NikhilAPatel/5/head -> origin/gh/NikhilAPatel/5/head 2025-12-04T10:14:41.1899718Z * [new branch] gh/NikhilAPatel/5/orig -> origin/gh/NikhilAPatel/5/orig 
2025-12-04T10:14:41.1900324Z * [new branch] gh/PaliC/17/base -> origin/gh/PaliC/17/base 2025-12-04T10:14:41.1900967Z * [new branch] gh/PaliC/17/head -> origin/gh/PaliC/17/head 2025-12-04T10:14:41.1901546Z * [new branch] gh/PaliC/17/orig -> origin/gh/PaliC/17/orig 2025-12-04T10:14:41.1902108Z * [new branch] gh/PaliC/18/base -> origin/gh/PaliC/18/base 2025-12-04T10:14:41.1902676Z * [new branch] gh/PaliC/18/head -> origin/gh/PaliC/18/head 2025-12-04T10:14:41.1903243Z * [new branch] gh/PaliC/18/orig -> origin/gh/PaliC/18/orig 2025-12-04T10:14:41.1903799Z * [new branch] gh/PaliC/20/base -> origin/gh/PaliC/20/base 2025-12-04T10:14:41.1904356Z * [new branch] gh/PaliC/20/head -> origin/gh/PaliC/20/head 2025-12-04T10:14:41.1904914Z * [new branch] gh/PaliC/20/orig -> origin/gh/PaliC/20/orig 2025-12-04T10:14:41.1905480Z * [new branch] gh/PaliC/21/base -> origin/gh/PaliC/21/base 2025-12-04T10:14:41.1906045Z * [new branch] gh/PaliC/21/head -> origin/gh/PaliC/21/head 2025-12-04T10:14:41.1906608Z * [new branch] gh/PaliC/21/orig -> origin/gh/PaliC/21/orig 2025-12-04T10:14:41.1907166Z * [new branch] gh/PaliC/23/base -> origin/gh/PaliC/23/base 2025-12-04T10:14:41.1907742Z * [new branch] gh/PaliC/23/head -> origin/gh/PaliC/23/head 2025-12-04T10:14:41.1908303Z * [new branch] gh/PaliC/23/orig -> origin/gh/PaliC/23/orig 2025-12-04T10:14:41.1908859Z * [new branch] gh/PaliC/24/base -> origin/gh/PaliC/24/base 2025-12-04T10:14:41.1909420Z * [new branch] gh/PaliC/24/head -> origin/gh/PaliC/24/head 2025-12-04T10:14:41.1909983Z * [new branch] gh/PaliC/24/orig -> origin/gh/PaliC/24/orig 2025-12-04T10:14:41.1910541Z * [new branch] gh/PaliC/25/head -> origin/gh/PaliC/25/head 2025-12-04T10:14:41.1911246Z * [new branch] gh/PaliC/25/next -> origin/gh/PaliC/25/next 2025-12-04T10:14:41.1911804Z * [new branch] gh/PaliC/25/orig -> origin/gh/PaliC/25/orig 2025-12-04T10:14:41.1912375Z * [new branch] gh/PaliC/26/head -> origin/gh/PaliC/26/head 2025-12-04T10:14:41.1912933Z * [new branch] gh/PaliC/26/next -> origin/gh/PaliC/26/next 2025-12-04T10:14:41.1913494Z * [new branch] gh/PaliC/26/orig -> origin/gh/PaliC/26/orig 2025-12-04T10:14:41.1914056Z * [new branch] gh/PaliC/27/next -> origin/gh/PaliC/27/next 2025-12-04T10:14:41.1914615Z * [new branch] gh/PaliC/28/head -> origin/gh/PaliC/28/head 2025-12-04T10:14:41.1915170Z * [new branch] gh/PaliC/28/next -> origin/gh/PaliC/28/next 2025-12-04T10:14:41.1915736Z * [new branch] gh/PaliC/28/orig -> origin/gh/PaliC/28/orig 2025-12-04T10:14:41.1916300Z * [new branch] gh/PaliC/29/head -> origin/gh/PaliC/29/head 2025-12-04T10:14:41.1916855Z * [new branch] gh/PaliC/29/next -> origin/gh/PaliC/29/next 2025-12-04T10:14:41.1917413Z * [new branch] gh/PaliC/29/orig -> origin/gh/PaliC/29/orig 2025-12-04T10:14:41.1917972Z * [new branch] gh/PaliC/30/head -> origin/gh/PaliC/30/head 2025-12-04T10:14:41.1918610Z * [new branch] gh/PaliC/30/next -> origin/gh/PaliC/30/next 2025-12-04T10:14:41.1919176Z * [new branch] gh/PaliC/30/orig -> origin/gh/PaliC/30/orig 2025-12-04T10:14:41.1919737Z * [new branch] gh/PaliC/31/head -> origin/gh/PaliC/31/head 2025-12-04T10:14:41.1920292Z * [new branch] gh/PaliC/31/next -> origin/gh/PaliC/31/next 2025-12-04T10:14:41.1920916Z * [new branch] gh/PaliC/31/orig -> origin/gh/PaliC/31/orig 2025-12-04T10:14:41.1921518Z * [new branch] gh/PaulZhang12/25/base -> origin/gh/PaulZhang12/25/base 2025-12-04T10:14:41.1922157Z * [new branch] gh/PaulZhang12/25/head -> origin/gh/PaulZhang12/25/head 2025-12-04T10:14:41.1922783Z * [new branch] gh/PaulZhang12/25/orig -> 
origin/gh/PaulZhang12/25/orig
2025-12-04T10:14:41.1923403Z * [new branch] gh/PaulZhang12/28/base -> origin/gh/PaulZhang12/28/base
[... several hundred similar "* [new branch] gh/<user>/<N>/{base,head,orig} -> origin/gh/<user>/<N>/..." fetch entries elided (timestamps 2025-12-04T10:14:41.19Z through 2025-12-04T10:14:41.22Z), plus a few named feature branches such as gh/aditew01/openblas and gh/alexbrauckmann/paddedtensor_faketensor_init; users covered: PaulZhang12, SamGinzburg, SherlockNoMad, Sidharth123-cpu, StrongerXi, XilunWu, XuehaiPan, ZhiweiYan-96, aakhundov, aditew01, albanD, alexbrauckmann, alexsamardzic, amjames, andrewor14, andyanwang, angelayi, anijain2305, anjali411, anshul-si, aorenste, avikchaudhuri, bdhirsh, benjaminglass1, bobrenjc93, c00w, clee2000, coconutruben, colinchan15, d4l3k, davidberard98, desertfire ...]
2025-12-04T10:14:41.2202032Z * [new branch] gh/desertfire/608/base -> 
origin/gh/desertfire/608/base 2025-12-04T10:14:41.2202105Z * [new branch] gh/desertfire/608/head -> origin/gh/desertfire/608/head 2025-12-04T10:14:41.2202179Z * [new branch] gh/desertfire/608/orig -> origin/gh/desertfire/608/orig 2025-12-04T10:14:41.2202254Z * [new branch] gh/desertfire/609/base -> origin/gh/desertfire/609/base 2025-12-04T10:14:41.2202327Z * [new branch] gh/desertfire/609/head -> origin/gh/desertfire/609/head 2025-12-04T10:14:41.2202402Z * [new branch] gh/desertfire/609/orig -> origin/gh/desertfire/609/orig 2025-12-04T10:14:41.2202475Z * [new branch] gh/desertfire/610/base -> origin/gh/desertfire/610/base 2025-12-04T10:14:41.2202549Z * [new branch] gh/desertfire/610/head -> origin/gh/desertfire/610/head 2025-12-04T10:14:41.2202626Z * [new branch] gh/desertfire/610/orig -> origin/gh/desertfire/610/orig 2025-12-04T10:14:41.2202702Z * [new branch] gh/desertfire/611/base -> origin/gh/desertfire/611/base 2025-12-04T10:14:41.2202777Z * [new branch] gh/desertfire/611/head -> origin/gh/desertfire/611/head 2025-12-04T10:14:41.2202853Z * [new branch] gh/desertfire/611/orig -> origin/gh/desertfire/611/orig 2025-12-04T10:14:41.2202971Z * [new branch] gh/desertfire/612/base -> origin/gh/desertfire/612/base 2025-12-04T10:14:41.2203046Z * [new branch] gh/desertfire/612/head -> origin/gh/desertfire/612/head 2025-12-04T10:14:41.2203121Z * [new branch] gh/desertfire/612/orig -> origin/gh/desertfire/612/orig 2025-12-04T10:14:41.2203195Z * [new branch] gh/desertfire/613/base -> origin/gh/desertfire/613/base 2025-12-04T10:14:41.2203271Z * [new branch] gh/desertfire/613/head -> origin/gh/desertfire/613/head 2025-12-04T10:14:41.2203344Z * [new branch] gh/desertfire/613/orig -> origin/gh/desertfire/613/orig 2025-12-04T10:14:41.2203420Z * [new branch] gh/desertfire/614/base -> origin/gh/desertfire/614/base 2025-12-04T10:14:41.2203497Z * [new branch] gh/desertfire/614/head -> origin/gh/desertfire/614/head 2025-12-04T10:14:41.2203570Z * [new branch] gh/desertfire/614/orig -> origin/gh/desertfire/614/orig 2025-12-04T10:14:41.2203645Z * [new branch] gh/desertfire/615/base -> origin/gh/desertfire/615/base 2025-12-04T10:14:41.2203720Z * [new branch] gh/desertfire/615/head -> origin/gh/desertfire/615/head 2025-12-04T10:14:41.2203794Z * [new branch] gh/desertfire/615/orig -> origin/gh/desertfire/615/orig 2025-12-04T10:14:41.2203868Z * [new branch] gh/desertfire/616/base -> origin/gh/desertfire/616/base 2025-12-04T10:14:41.2203944Z * [new branch] gh/desertfire/616/head -> origin/gh/desertfire/616/head 2025-12-04T10:14:41.2204020Z * [new branch] gh/desertfire/616/orig -> origin/gh/desertfire/616/orig 2025-12-04T10:14:41.2204095Z * [new branch] gh/desertfire/617/base -> origin/gh/desertfire/617/base 2025-12-04T10:14:41.2204172Z * [new branch] gh/desertfire/617/head -> origin/gh/desertfire/617/head 2025-12-04T10:14:41.2204245Z * [new branch] gh/desertfire/617/orig -> origin/gh/desertfire/617/orig 2025-12-04T10:14:41.2204319Z * [new branch] gh/dharakk/1/base -> origin/gh/dharakk/1/base 2025-12-04T10:14:41.2204392Z * [new branch] gh/dharakk/1/head -> origin/gh/dharakk/1/head 2025-12-04T10:14:41.2204465Z * [new branch] gh/drisspg/170/base -> origin/gh/drisspg/170/base 2025-12-04T10:14:41.2204538Z * [new branch] gh/drisspg/170/head -> origin/gh/drisspg/170/head 2025-12-04T10:14:41.2204611Z * [new branch] gh/drisspg/170/orig -> origin/gh/drisspg/170/orig 2025-12-04T10:14:41.2204714Z * [new branch] gh/drisspg/182/base -> origin/gh/drisspg/182/base 2025-12-04T10:14:41.2204784Z * [new branch] gh/drisspg/182/head -> 
origin/gh/drisspg/182/head 2025-12-04T10:14:41.2204855Z * [new branch] gh/drisspg/183/base -> origin/gh/drisspg/183/base 2025-12-04T10:14:41.2204924Z * [new branch] gh/drisspg/183/head -> origin/gh/drisspg/183/head 2025-12-04T10:14:41.2204995Z * [new branch] gh/drisspg/184/base -> origin/gh/drisspg/184/base 2025-12-04T10:14:41.2205064Z * [new branch] gh/drisspg/184/head -> origin/gh/drisspg/184/head 2025-12-04T10:14:41.2205133Z * [new branch] gh/drisspg/185/base -> origin/gh/drisspg/185/base 2025-12-04T10:14:41.2205204Z * [new branch] gh/drisspg/185/head -> origin/gh/drisspg/185/head 2025-12-04T10:14:41.2205273Z * [new branch] gh/drisspg/194/base -> origin/gh/drisspg/194/base 2025-12-04T10:14:41.2205344Z * [new branch] gh/drisspg/194/head -> origin/gh/drisspg/194/head 2025-12-04T10:14:41.2205416Z * [new branch] gh/drisspg/194/orig -> origin/gh/drisspg/194/orig 2025-12-04T10:14:41.2205485Z * [new branch] gh/drisspg/200/base -> origin/gh/drisspg/200/base 2025-12-04T10:14:41.2205555Z * [new branch] gh/drisspg/200/head -> origin/gh/drisspg/200/head 2025-12-04T10:14:41.2205650Z * [new branch] gh/drisspg/200/orig -> origin/gh/drisspg/200/orig 2025-12-04T10:14:41.2205720Z * [new branch] gh/drisspg/218/base -> origin/gh/drisspg/218/base 2025-12-04T10:14:41.2205789Z * [new branch] gh/drisspg/218/head -> origin/gh/drisspg/218/head 2025-12-04T10:14:41.2205860Z * [new branch] gh/drisspg/218/orig -> origin/gh/drisspg/218/orig 2025-12-04T10:14:41.2205930Z * [new branch] gh/drisspg/219/base -> origin/gh/drisspg/219/base 2025-12-04T10:14:41.2206000Z * [new branch] gh/drisspg/219/head -> origin/gh/drisspg/219/head 2025-12-04T10:14:41.2206072Z * [new branch] gh/drisspg/219/orig -> origin/gh/drisspg/219/orig 2025-12-04T10:14:41.2206141Z * [new branch] gh/drisspg/220/base -> origin/gh/drisspg/220/base 2025-12-04T10:14:41.2206210Z * [new branch] gh/drisspg/220/head -> origin/gh/drisspg/220/head 2025-12-04T10:14:41.2206283Z * [new branch] gh/drisspg/220/orig -> origin/gh/drisspg/220/orig 2025-12-04T10:14:41.2206352Z * [new branch] gh/drisspg/221/base -> origin/gh/drisspg/221/base 2025-12-04T10:14:41.2206421Z * [new branch] gh/drisspg/221/head -> origin/gh/drisspg/221/head 2025-12-04T10:14:41.2206491Z * [new branch] gh/drisspg/221/orig -> origin/gh/drisspg/221/orig 2025-12-04T10:14:41.2206561Z * [new branch] gh/drisspg/222/base -> origin/gh/drisspg/222/base 2025-12-04T10:14:41.2206633Z * [new branch] gh/drisspg/222/head -> origin/gh/drisspg/222/head 2025-12-04T10:14:41.2206702Z * [new branch] gh/drisspg/222/orig -> origin/gh/drisspg/222/orig 2025-12-04T10:14:41.2206770Z * [new branch] gh/drisspg/223/base -> origin/gh/drisspg/223/base 2025-12-04T10:14:41.2206842Z * [new branch] gh/drisspg/223/head -> origin/gh/drisspg/223/head 2025-12-04T10:14:41.2206914Z * [new branch] gh/drisspg/223/orig -> origin/gh/drisspg/223/orig 2025-12-04T10:14:41.2206983Z * [new branch] gh/drisspg/224/base -> origin/gh/drisspg/224/base 2025-12-04T10:14:41.2207057Z * [new branch] gh/drisspg/224/head -> origin/gh/drisspg/224/head 2025-12-04T10:14:41.2207127Z * [new branch] gh/drisspg/224/orig -> origin/gh/drisspg/224/orig 2025-12-04T10:14:41.2207196Z * [new branch] gh/drisspg/225/base -> origin/gh/drisspg/225/base 2025-12-04T10:14:41.2207296Z * [new branch] gh/drisspg/225/head -> origin/gh/drisspg/225/head 2025-12-04T10:14:41.2207365Z * [new branch] gh/drisspg/225/orig -> origin/gh/drisspg/225/orig 2025-12-04T10:14:41.2207434Z * [new branch] gh/drisspg/226/base -> origin/gh/drisspg/226/base 2025-12-04T10:14:41.2207507Z * [new branch] 
gh/drisspg/226/head -> origin/gh/drisspg/226/head 2025-12-04T10:14:41.2207576Z * [new branch] gh/drisspg/226/orig -> origin/gh/drisspg/226/orig 2025-12-04T10:14:41.2207645Z * [new branch] gh/drisspg/227/base -> origin/gh/drisspg/227/base 2025-12-04T10:14:41.2207717Z * [new branch] gh/drisspg/227/head -> origin/gh/drisspg/227/head 2025-12-04T10:14:41.2207787Z * [new branch] gh/drisspg/227/orig -> origin/gh/drisspg/227/orig 2025-12-04T10:14:41.2207856Z * [new branch] gh/drisspg/228/base -> origin/gh/drisspg/228/base 2025-12-04T10:14:41.2207929Z * [new branch] gh/drisspg/228/head -> origin/gh/drisspg/228/head 2025-12-04T10:14:41.2207998Z * [new branch] gh/drisspg/228/orig -> origin/gh/drisspg/228/orig 2025-12-04T10:14:41.2208067Z * [new branch] gh/drisspg/229/base -> origin/gh/drisspg/229/base 2025-12-04T10:14:41.2208140Z * [new branch] gh/drisspg/229/head -> origin/gh/drisspg/229/head 2025-12-04T10:14:41.2208234Z * [new branch] gh/drisspg/229/orig -> origin/gh/drisspg/229/orig 2025-12-04T10:14:41.2208306Z * [new branch] gh/drisspg/230/base -> origin/gh/drisspg/230/base 2025-12-04T10:14:41.2208375Z * [new branch] gh/drisspg/230/head -> origin/gh/drisspg/230/head 2025-12-04T10:14:41.2208446Z * [new branch] gh/drisspg/230/orig -> origin/gh/drisspg/230/orig 2025-12-04T10:14:41.2208521Z * [new branch] gh/dsjohns2/1/base -> origin/gh/dsjohns2/1/base 2025-12-04T10:14:41.2208595Z * [new branch] gh/dsjohns2/1/head -> origin/gh/dsjohns2/1/head 2025-12-04T10:14:41.2208674Z * [new branch] gh/dzmitry-huba/1/base -> origin/gh/dzmitry-huba/1/base 2025-12-04T10:14:41.2208753Z * [new branch] gh/dzmitry-huba/1/head -> origin/gh/dzmitry-huba/1/head 2025-12-04T10:14:41.2208832Z * [new branch] gh/dzmitry-huba/12/base -> origin/gh/dzmitry-huba/12/base 2025-12-04T10:14:41.2208908Z * [new branch] gh/dzmitry-huba/12/head -> origin/gh/dzmitry-huba/12/head 2025-12-04T10:14:41.2208987Z * [new branch] gh/dzmitry-huba/12/orig -> origin/gh/dzmitry-huba/12/orig 2025-12-04T10:14:41.2209062Z * [new branch] gh/dzmitry-huba/13/base -> origin/gh/dzmitry-huba/13/base 2025-12-04T10:14:41.2209137Z * [new branch] gh/dzmitry-huba/13/head -> origin/gh/dzmitry-huba/13/head 2025-12-04T10:14:41.2209214Z * [new branch] gh/dzmitry-huba/13/orig -> origin/gh/dzmitry-huba/13/orig 2025-12-04T10:14:41.2209289Z * [new branch] gh/dzmitry-huba/14/base -> origin/gh/dzmitry-huba/14/base 2025-12-04T10:14:41.2209364Z * [new branch] gh/dzmitry-huba/14/head -> origin/gh/dzmitry-huba/14/head 2025-12-04T10:14:41.2209441Z * [new branch] gh/dzmitry-huba/14/orig -> origin/gh/dzmitry-huba/14/orig 2025-12-04T10:14:41.2209517Z * [new branch] gh/dzmitry-huba/15/base -> origin/gh/dzmitry-huba/15/base 2025-12-04T10:14:41.2209592Z * [new branch] gh/dzmitry-huba/15/head -> origin/gh/dzmitry-huba/15/head 2025-12-04T10:14:41.2209668Z * [new branch] gh/dzmitry-huba/15/orig -> origin/gh/dzmitry-huba/15/orig 2025-12-04T10:14:41.2209742Z * [new branch] gh/dzmitry-huba/16/base -> origin/gh/dzmitry-huba/16/base 2025-12-04T10:14:41.2209817Z * [new branch] gh/dzmitry-huba/16/head -> origin/gh/dzmitry-huba/16/head 2025-12-04T10:14:41.2209895Z * [new branch] gh/dzmitry-huba/16/orig -> origin/gh/dzmitry-huba/16/orig 2025-12-04T10:14:41.2209995Z * [new branch] gh/dzmitry-huba/17/base -> origin/gh/dzmitry-huba/17/base 2025-12-04T10:14:41.2210072Z * [new branch] gh/dzmitry-huba/17/head -> origin/gh/dzmitry-huba/17/head 2025-12-04T10:14:41.2210148Z * [new branch] gh/dzmitry-huba/17/orig -> origin/gh/dzmitry-huba/17/orig 2025-12-04T10:14:41.2210226Z * [new branch] 
gh/dzmitry-huba/2/base -> origin/gh/dzmitry-huba/2/base 2025-12-04T10:14:41.2210303Z * [new branch] gh/dzmitry-huba/2/head -> origin/gh/dzmitry-huba/2/head 2025-12-04T10:14:41.2210378Z * [new branch] gh/dzmitry-huba/3/base -> origin/gh/dzmitry-huba/3/base 2025-12-04T10:14:41.2210453Z * [new branch] gh/dzmitry-huba/3/head -> origin/gh/dzmitry-huba/3/head 2025-12-04T10:14:41.2210531Z * [new branch] gh/eellison/808/base -> origin/gh/eellison/808/base 2025-12-04T10:14:41.2210644Z * [new branch] gh/eellison/808/head -> origin/gh/eellison/808/head 2025-12-04T10:14:41.2210718Z * [new branch] gh/eellison/808/orig -> origin/gh/eellison/808/orig 2025-12-04T10:14:41.2210792Z * [new branch] gh/eellison/822/base -> origin/gh/eellison/822/base 2025-12-04T10:14:41.2210862Z * [new branch] gh/eellison/822/head -> origin/gh/eellison/822/head 2025-12-04T10:14:41.2210975Z * [new branch] gh/eellison/822/orig -> origin/gh/eellison/822/orig 2025-12-04T10:14:41.2211049Z * [new branch] gh/eellison/823/base -> origin/gh/eellison/823/base 2025-12-04T10:14:41.2211119Z * [new branch] gh/eellison/823/head -> origin/gh/eellison/823/head 2025-12-04T10:14:41.2211189Z * [new branch] gh/eellison/823/orig -> origin/gh/eellison/823/orig 2025-12-04T10:14:41.2211262Z * [new branch] gh/eellison/862/base -> origin/gh/eellison/862/base 2025-12-04T10:14:41.2211333Z * [new branch] gh/eellison/862/head -> origin/gh/eellison/862/head 2025-12-04T10:14:41.2211405Z * [new branch] gh/eellison/862/orig -> origin/gh/eellison/862/orig 2025-12-04T10:14:41.2211480Z * [new branch] gh/eellison/863/base -> origin/gh/eellison/863/base 2025-12-04T10:14:41.2211551Z * [new branch] gh/eellison/863/head -> origin/gh/eellison/863/head 2025-12-04T10:14:41.2211625Z * [new branch] gh/eellison/863/orig -> origin/gh/eellison/863/orig 2025-12-04T10:14:41.2211695Z * [new branch] gh/eellison/864/base -> origin/gh/eellison/864/base 2025-12-04T10:14:41.2211765Z * [new branch] gh/eellison/864/head -> origin/gh/eellison/864/head 2025-12-04T10:14:41.2211840Z * [new branch] gh/eellison/864/orig -> origin/gh/eellison/864/orig 2025-12-04T10:14:41.2211910Z * [new branch] gh/eellison/865/base -> origin/gh/eellison/865/base 2025-12-04T10:14:41.2211983Z * [new branch] gh/eellison/865/head -> origin/gh/eellison/865/head 2025-12-04T10:14:41.2212056Z * [new branch] gh/eellison/865/orig -> origin/gh/eellison/865/orig 2025-12-04T10:14:41.2212127Z * [new branch] gh/eellison/866/base -> origin/gh/eellison/866/base 2025-12-04T10:14:41.2212198Z * [new branch] gh/eellison/866/head -> origin/gh/eellison/866/head 2025-12-04T10:14:41.2212271Z * [new branch] gh/eellison/866/orig -> origin/gh/eellison/866/orig 2025-12-04T10:14:41.2212341Z * [new branch] gh/eellison/867/base -> origin/gh/eellison/867/base 2025-12-04T10:14:41.2212412Z * [new branch] gh/eellison/867/head -> origin/gh/eellison/867/head 2025-12-04T10:14:41.2212485Z * [new branch] gh/eellison/867/orig -> origin/gh/eellison/867/orig 2025-12-04T10:14:41.2212556Z * [new branch] gh/eellison/868/base -> origin/gh/eellison/868/base 2025-12-04T10:14:41.2212667Z * [new branch] gh/eellison/868/head -> origin/gh/eellison/868/head 2025-12-04T10:14:41.2212741Z * [new branch] gh/eellison/868/orig -> origin/gh/eellison/868/orig 2025-12-04T10:14:41.2212811Z * [new branch] gh/eellison/869/base -> origin/gh/eellison/869/base 2025-12-04T10:14:41.2212883Z * [new branch] gh/eellison/869/head -> origin/gh/eellison/869/head 2025-12-04T10:14:41.2212956Z * [new branch] gh/eellison/869/orig -> origin/gh/eellison/869/orig 2025-12-04T10:14:41.2213028Z * 
[new branch] gh/eellison/870/base -> origin/gh/eellison/870/base 2025-12-04T10:14:41.2213102Z * [new branch] gh/eellison/870/head -> origin/gh/eellison/870/head 2025-12-04T10:14:41.2213173Z * [new branch] gh/eellison/870/orig -> origin/gh/eellison/870/orig 2025-12-04T10:14:41.2213243Z * [new branch] gh/eellison/871/base -> origin/gh/eellison/871/base 2025-12-04T10:14:41.2213317Z * [new branch] gh/eellison/871/head -> origin/gh/eellison/871/head 2025-12-04T10:14:41.2213388Z * [new branch] gh/eellison/871/orig -> origin/gh/eellison/871/orig 2025-12-04T10:14:41.2213458Z * [new branch] gh/eellison/872/base -> origin/gh/eellison/872/base 2025-12-04T10:14:41.2213558Z * [new branch] gh/eellison/872/head -> origin/gh/eellison/872/head 2025-12-04T10:14:41.2213630Z * [new branch] gh/eellison/872/orig -> origin/gh/eellison/872/orig 2025-12-04T10:14:41.2213702Z * [new branch] gh/eellison/873/base -> origin/gh/eellison/873/base 2025-12-04T10:14:41.2213777Z * [new branch] gh/eellison/873/head -> origin/gh/eellison/873/head 2025-12-04T10:14:41.2213847Z * [new branch] gh/eellison/873/orig -> origin/gh/eellison/873/orig 2025-12-04T10:14:41.2213918Z * [new branch] gh/eellison/874/base -> origin/gh/eellison/874/base 2025-12-04T10:14:41.2213994Z * [new branch] gh/eellison/874/head -> origin/gh/eellison/874/head 2025-12-04T10:14:41.2214064Z * [new branch] gh/eellison/874/orig -> origin/gh/eellison/874/orig 2025-12-04T10:14:41.2214135Z * [new branch] gh/eellison/875/base -> origin/gh/eellison/875/base 2025-12-04T10:14:41.2214211Z * [new branch] gh/eellison/875/head -> origin/gh/eellison/875/head 2025-12-04T10:14:41.2214282Z * [new branch] gh/eellison/875/orig -> origin/gh/eellison/875/orig 2025-12-04T10:14:41.2214353Z * [new branch] gh/eellison/876/base -> origin/gh/eellison/876/base 2025-12-04T10:14:41.2214427Z * [new branch] gh/eellison/876/head -> origin/gh/eellison/876/head 2025-12-04T10:14:41.2214498Z * [new branch] gh/eellison/876/orig -> origin/gh/eellison/876/orig 2025-12-04T10:14:41.2214569Z * [new branch] gh/eellison/877/base -> origin/gh/eellison/877/base 2025-12-04T10:14:41.2214644Z * [new branch] gh/eellison/877/head -> origin/gh/eellison/877/head 2025-12-04T10:14:41.2214714Z * [new branch] gh/eellison/877/orig -> origin/gh/eellison/877/orig 2025-12-04T10:14:41.2214787Z * [new branch] gh/eellison/878/base -> origin/gh/eellison/878/base 2025-12-04T10:14:41.2214860Z * [new branch] gh/eellison/878/head -> origin/gh/eellison/878/head 2025-12-04T10:14:41.2214931Z * [new branch] gh/eellison/878/orig -> origin/gh/eellison/878/orig 2025-12-04T10:14:41.2215004Z * [new branch] gh/eellison/879/base -> origin/gh/eellison/879/base 2025-12-04T10:14:41.2215075Z * [new branch] gh/eellison/879/head -> origin/gh/eellison/879/head 2025-12-04T10:14:41.2215145Z * [new branch] gh/eellison/879/orig -> origin/gh/eellison/879/orig 2025-12-04T10:14:41.2215219Z * [new branch] gh/eellison/880/base -> origin/gh/eellison/880/base 2025-12-04T10:14:41.2215325Z * [new branch] gh/eellison/880/head -> origin/gh/eellison/880/head 2025-12-04T10:14:41.2215397Z * [new branch] gh/eellison/880/orig -> origin/gh/eellison/880/orig 2025-12-04T10:14:41.2215470Z * [new branch] gh/eellison/881/base -> origin/gh/eellison/881/base 2025-12-04T10:14:41.2215542Z * [new branch] gh/eellison/881/head -> origin/gh/eellison/881/head 2025-12-04T10:14:41.2215614Z * [new branch] gh/eellison/881/orig -> origin/gh/eellison/881/orig 2025-12-04T10:14:41.2215686Z * [new branch] gh/eellison/882/base -> origin/gh/eellison/882/base 2025-12-04T10:14:41.2215757Z * 
[new branch] gh/eellison/882/head -> origin/gh/eellison/882/head 2025-12-04T10:14:41.2215827Z * [new branch] gh/eellison/882/orig -> origin/gh/eellison/882/orig 2025-12-04T10:14:41.2215901Z * [new branch] gh/eellison/883/base -> origin/gh/eellison/883/base 2025-12-04T10:14:41.2215974Z * [new branch] gh/eellison/883/head -> origin/gh/eellison/883/head 2025-12-04T10:14:41.2216046Z * [new branch] gh/eellison/883/orig -> origin/gh/eellison/883/orig 2025-12-04T10:14:41.2216119Z * [new branch] gh/eellison/884/base -> origin/gh/eellison/884/base 2025-12-04T10:14:41.2216213Z * [new branch] gh/eellison/884/head -> origin/gh/eellison/884/head 2025-12-04T10:14:41.2216286Z * [new branch] gh/eellison/884/orig -> origin/gh/eellison/884/orig 2025-12-04T10:14:41.2216355Z * [new branch] gh/etaf/147/base -> origin/gh/etaf/147/base 2025-12-04T10:14:41.2216422Z * [new branch] gh/etaf/147/head -> origin/gh/etaf/147/head 2025-12-04T10:14:41.2216492Z * [new branch] gh/etaf/154/base -> origin/gh/etaf/154/base 2025-12-04T10:14:41.2216557Z * [new branch] gh/etaf/154/head -> origin/gh/etaf/154/head 2025-12-04T10:14:41.2216624Z * [new branch] gh/etaf/154/orig -> origin/gh/etaf/154/orig 2025-12-04T10:14:41.2216690Z * [new branch] gh/etaf/156/base -> origin/gh/etaf/156/base 2025-12-04T10:14:41.2216755Z * [new branch] gh/etaf/156/head -> origin/gh/etaf/156/head 2025-12-04T10:14:41.2216821Z * [new branch] gh/etaf/156/orig -> origin/gh/etaf/156/orig 2025-12-04T10:14:41.2216889Z * [new branch] gh/etaf/157/base -> origin/gh/etaf/157/base 2025-12-04T10:14:41.2216953Z * [new branch] gh/etaf/157/head -> origin/gh/etaf/157/head 2025-12-04T10:14:41.2217018Z * [new branch] gh/etaf/157/orig -> origin/gh/etaf/157/orig 2025-12-04T10:14:41.2217084Z * [new branch] gh/etaf/158/base -> origin/gh/etaf/158/base 2025-12-04T10:14:41.2217148Z * [new branch] gh/etaf/158/head -> origin/gh/etaf/158/head 2025-12-04T10:14:41.2217215Z * [new branch] gh/etaf/158/orig -> origin/gh/etaf/158/orig 2025-12-04T10:14:41.2217282Z * [new branch] gh/etaf/159/base -> origin/gh/etaf/159/base 2025-12-04T10:14:41.2217347Z * [new branch] gh/etaf/159/head -> origin/gh/etaf/159/head 2025-12-04T10:14:41.2217415Z * [new branch] gh/etaf/159/orig -> origin/gh/etaf/159/orig 2025-12-04T10:14:41.2217483Z * [new branch] gh/etaf/160/base -> origin/gh/etaf/160/base 2025-12-04T10:14:41.2217548Z * [new branch] gh/etaf/160/head -> origin/gh/etaf/160/head 2025-12-04T10:14:41.2217613Z * [new branch] gh/etaf/160/orig -> origin/gh/etaf/160/orig 2025-12-04T10:14:41.2217680Z * [new branch] gh/etaf/161/base -> origin/gh/etaf/161/base 2025-12-04T10:14:41.2217745Z * [new branch] gh/etaf/161/head -> origin/gh/etaf/161/head 2025-12-04T10:14:41.2217837Z * [new branch] gh/etaf/161/orig -> origin/gh/etaf/161/orig 2025-12-04T10:14:41.2217902Z * [new branch] gh/etaf/166/base -> origin/gh/etaf/166/base 2025-12-04T10:14:41.2217966Z * [new branch] gh/etaf/166/head -> origin/gh/etaf/166/head 2025-12-04T10:14:41.2218033Z * [new branch] gh/etaf/166/orig -> origin/gh/etaf/166/orig 2025-12-04T10:14:41.2218098Z * [new branch] gh/etaf/167/base -> origin/gh/etaf/167/base 2025-12-04T10:14:41.2218163Z * [new branch] gh/etaf/167/head -> origin/gh/etaf/167/head 2025-12-04T10:14:41.2218230Z * [new branch] gh/etaf/167/orig -> origin/gh/etaf/167/orig 2025-12-04T10:14:41.2218295Z * [new branch] gh/etaf/168/base -> origin/gh/etaf/168/base 2025-12-04T10:14:41.2218360Z * [new branch] gh/etaf/168/head -> origin/gh/etaf/168/head 2025-12-04T10:14:41.2218428Z * [new branch] gh/etaf/168/orig -> origin/gh/etaf/168/orig 
2025-12-04T10:14:41.2218494Z * [new branch] gh/etaf/172/base -> origin/gh/etaf/172/base 2025-12-04T10:14:41.2218560Z * [new branch] gh/etaf/172/head -> origin/gh/etaf/172/head 2025-12-04T10:14:41.2218628Z * [new branch] gh/etaf/172/orig -> origin/gh/etaf/172/orig 2025-12-04T10:14:41.2218727Z * [new branch] gh/etaf/173/base -> origin/gh/etaf/173/base 2025-12-04T10:14:41.2218792Z * [new branch] gh/etaf/173/head -> origin/gh/etaf/173/head 2025-12-04T10:14:41.2218865Z * [new branch] gh/etaf/173/orig -> origin/gh/etaf/173/orig 2025-12-04T10:14:41.2218930Z * [new branch] gh/etaf/174/base -> origin/gh/etaf/174/base 2025-12-04T10:14:41.2218995Z * [new branch] gh/etaf/174/head -> origin/gh/etaf/174/head 2025-12-04T10:14:41.2219063Z * [new branch] gh/etaf/175/base -> origin/gh/etaf/175/base 2025-12-04T10:14:41.2219127Z * [new branch] gh/etaf/175/head -> origin/gh/etaf/175/head 2025-12-04T10:14:41.2219192Z * [new branch] gh/etaf/175/orig -> origin/gh/etaf/175/orig 2025-12-04T10:14:41.2219258Z * [new branch] gh/etaf/176/base -> origin/gh/etaf/176/base 2025-12-04T10:14:41.2219324Z * [new branch] gh/etaf/176/head -> origin/gh/etaf/176/head 2025-12-04T10:14:41.2219392Z * [new branch] gh/etaf/176/orig -> origin/gh/etaf/176/orig 2025-12-04T10:14:41.2219457Z * [new branch] gh/etaf/177/base -> origin/gh/etaf/177/base 2025-12-04T10:14:41.2219522Z * [new branch] gh/etaf/177/head -> origin/gh/etaf/177/head 2025-12-04T10:14:41.2219589Z * [new branch] gh/etaf/177/orig -> origin/gh/etaf/177/orig 2025-12-04T10:14:41.2219654Z * [new branch] gh/etaf/178/base -> origin/gh/etaf/178/base 2025-12-04T10:14:41.2219720Z * [new branch] gh/etaf/178/head -> origin/gh/etaf/178/head 2025-12-04T10:14:41.2219787Z * [new branch] gh/etaf/178/orig -> origin/gh/etaf/178/orig 2025-12-04T10:14:41.2219854Z * [new branch] gh/etaf/179/base -> origin/gh/etaf/179/base 2025-12-04T10:14:41.2219920Z * [new branch] gh/etaf/179/head -> origin/gh/etaf/179/head 2025-12-04T10:14:41.2219987Z * [new branch] gh/etaf/179/orig -> origin/gh/etaf/179/orig 2025-12-04T10:14:41.2220052Z * [new branch] gh/etaf/180/base -> origin/gh/etaf/180/base 2025-12-04T10:14:41.2220118Z * [new branch] gh/etaf/180/head -> origin/gh/etaf/180/head 2025-12-04T10:14:41.2220186Z * [new branch] gh/etaf/180/orig -> origin/gh/etaf/180/orig 2025-12-04T10:14:41.2220268Z * [new branch] gh/exclamaforte/1/base -> origin/gh/exclamaforte/1/base 2025-12-04T10:14:41.2220376Z * [new branch] gh/exclamaforte/1/head -> origin/gh/exclamaforte/1/head 2025-12-04T10:14:41.2220455Z * [new branch] gh/exclamaforte/2/base -> origin/gh/exclamaforte/2/base 2025-12-04T10:14:41.2220532Z * [new branch] gh/exclamaforte/2/head -> origin/gh/exclamaforte/2/head 2025-12-04T10:14:41.2220647Z * [new branch] gh/exclamaforte/3/base -> origin/gh/exclamaforte/3/base 2025-12-04T10:14:41.2220729Z * [new branch] gh/exclamaforte/3/head -> origin/gh/exclamaforte/3/head 2025-12-04T10:14:41.2220804Z * [new branch] gh/exclamaforte/4/base -> origin/gh/exclamaforte/4/base 2025-12-04T10:14:41.2220881Z * [new branch] gh/exclamaforte/4/head -> origin/gh/exclamaforte/4/head 2025-12-04T10:14:41.2220955Z * [new branch] gh/ezyang/2374/base -> origin/gh/ezyang/2374/base 2025-12-04T10:14:41.2221028Z * [new branch] gh/ezyang/2374/head -> origin/gh/ezyang/2374/head 2025-12-04T10:14:41.2221103Z * [new branch] gh/ezyang/2374/orig -> origin/gh/ezyang/2374/orig 2025-12-04T10:14:41.2221171Z * [new branch] gh/ezyang/2973/base -> origin/gh/ezyang/2973/base 2025-12-04T10:14:41.2221238Z * [new branch] gh/ezyang/2973/head -> 
origin/gh/ezyang/2973/head 2025-12-04T10:14:41.2221353Z * [new branch] gh/ezyang/2973/orig -> origin/gh/ezyang/2973/orig 2025-12-04T10:14:41.2221422Z * [new branch] gh/ezyang/2974/base -> origin/gh/ezyang/2974/base 2025-12-04T10:14:41.2221490Z * [new branch] gh/ezyang/2974/head -> origin/gh/ezyang/2974/head 2025-12-04T10:14:41.2221560Z * [new branch] gh/ezyang/2974/orig -> origin/gh/ezyang/2974/orig 2025-12-04T10:14:41.2221628Z * [new branch] gh/ezyang/3131/base -> origin/gh/ezyang/3131/base 2025-12-04T10:14:41.2221696Z * [new branch] gh/ezyang/3131/head -> origin/gh/ezyang/3131/head 2025-12-04T10:14:41.2221771Z * [new branch] gh/ezyang/3131/orig -> origin/gh/ezyang/3131/orig 2025-12-04T10:14:41.2221838Z * [new branch] gh/ezyang/3139/base -> origin/gh/ezyang/3139/base 2025-12-04T10:14:41.2221907Z * [new branch] gh/ezyang/3139/head -> origin/gh/ezyang/3139/head 2025-12-04T10:14:41.2221979Z * [new branch] gh/ezyang/3139/orig -> origin/gh/ezyang/3139/orig 2025-12-04T10:14:41.2222046Z * [new branch] gh/ezyang/3140/base -> origin/gh/ezyang/3140/base 2025-12-04T10:14:41.2222114Z * [new branch] gh/ezyang/3140/head -> origin/gh/ezyang/3140/head 2025-12-04T10:14:41.2222184Z * [new branch] gh/ezyang/3140/orig -> origin/gh/ezyang/3140/orig 2025-12-04T10:14:41.2222253Z * [new branch] gh/ezyang/3143/base -> origin/gh/ezyang/3143/base 2025-12-04T10:14:41.2222321Z * [new branch] gh/ezyang/3143/head -> origin/gh/ezyang/3143/head 2025-12-04T10:14:41.2222395Z * [new branch] gh/ezyang/3143/orig -> origin/gh/ezyang/3143/orig 2025-12-04T10:14:41.2222462Z * [new branch] gh/ezyang/3144/base -> origin/gh/ezyang/3144/base 2025-12-04T10:14:41.2222530Z * [new branch] gh/ezyang/3144/head -> origin/gh/ezyang/3144/head 2025-12-04T10:14:41.2222602Z * [new branch] gh/ezyang/3144/orig -> origin/gh/ezyang/3144/orig 2025-12-04T10:14:41.2222671Z * [new branch] gh/ezyang/3167/base -> origin/gh/ezyang/3167/base 2025-12-04T10:14:41.2222738Z * [new branch] gh/ezyang/3167/head -> origin/gh/ezyang/3167/head 2025-12-04T10:14:41.2222811Z * [new branch] gh/ezyang/3167/orig -> origin/gh/ezyang/3167/orig 2025-12-04T10:14:41.2222880Z * [new branch] gh/ezyang/3173/base -> origin/gh/ezyang/3173/base 2025-12-04T10:14:41.2222950Z * [new branch] gh/ezyang/3173/head -> origin/gh/ezyang/3173/head 2025-12-04T10:14:41.2223064Z * [new branch] gh/ezyang/3173/orig -> origin/gh/ezyang/3173/orig 2025-12-04T10:14:41.2223134Z * [new branch] gh/ezyang/3175/base -> origin/gh/ezyang/3175/base 2025-12-04T10:14:41.2223205Z * [new branch] gh/ezyang/3175/head -> origin/gh/ezyang/3175/head 2025-12-04T10:14:41.2223274Z * [new branch] gh/ezyang/3175/orig -> origin/gh/ezyang/3175/orig 2025-12-04T10:14:41.2223345Z * [new branch] gh/ezyang/3182/base -> origin/gh/ezyang/3182/base 2025-12-04T10:14:41.2223415Z * [new branch] gh/ezyang/3182/head -> origin/gh/ezyang/3182/head 2025-12-04T10:14:41.2223482Z * [new branch] gh/ezyang/3182/orig -> origin/gh/ezyang/3182/orig 2025-12-04T10:14:41.2223550Z * [new branch] gh/ezyang/3185/base -> origin/gh/ezyang/3185/base 2025-12-04T10:14:41.2223619Z * [new branch] gh/ezyang/3185/head -> origin/gh/ezyang/3185/head 2025-12-04T10:14:41.2223688Z * [new branch] gh/ezyang/3185/orig -> origin/gh/ezyang/3185/orig 2025-12-04T10:14:41.2223757Z * [new branch] gh/ezyang/3189/base -> origin/gh/ezyang/3189/base 2025-12-04T10:14:41.2223830Z * [new branch] gh/ezyang/3189/head -> origin/gh/ezyang/3189/head 2025-12-04T10:14:41.2223935Z * [new branch] gh/ezyang/3189/orig -> origin/gh/ezyang/3189/orig 2025-12-04T10:14:41.2224003Z * [new branch] 
gh/ezyang/3191/base -> origin/gh/ezyang/3191/base 2025-12-04T10:14:41.2224073Z * [new branch] gh/ezyang/3191/head -> origin/gh/ezyang/3191/head 2025-12-04T10:14:41.2224143Z * [new branch] gh/ezyang/3191/orig -> origin/gh/ezyang/3191/orig 2025-12-04T10:14:41.2233206Z * [new branch] gh/ezyang/3192/base -> origin/gh/ezyang/3192/base 2025-12-04T10:14:41.2233310Z * [new branch] gh/ezyang/3192/head -> origin/gh/ezyang/3192/head 2025-12-04T10:14:41.2233391Z * [new branch] gh/ezyang/3192/orig -> origin/gh/ezyang/3192/orig 2025-12-04T10:14:41.2233461Z * [new branch] gh/ezyang/3193/base -> origin/gh/ezyang/3193/base 2025-12-04T10:14:41.2233532Z * [new branch] gh/ezyang/3193/head -> origin/gh/ezyang/3193/head 2025-12-04T10:14:41.2233608Z * [new branch] gh/ezyang/3193/orig -> origin/gh/ezyang/3193/orig 2025-12-04T10:14:41.2233679Z * [new branch] gh/ezyang/3194/base -> origin/gh/ezyang/3194/base 2025-12-04T10:14:41.2233747Z * [new branch] gh/ezyang/3194/head -> origin/gh/ezyang/3194/head 2025-12-04T10:14:41.2233821Z * [new branch] gh/ezyang/3194/orig -> origin/gh/ezyang/3194/orig 2025-12-04T10:14:41.2233891Z * [new branch] gh/ezyang/3195/base -> origin/gh/ezyang/3195/base 2025-12-04T10:14:41.2233961Z * [new branch] gh/ezyang/3195/head -> origin/gh/ezyang/3195/head 2025-12-04T10:14:41.2234034Z * [new branch] gh/ezyang/3195/orig -> origin/gh/ezyang/3195/orig 2025-12-04T10:14:41.2234102Z * [new branch] gh/ezyang/3196/base -> origin/gh/ezyang/3196/base 2025-12-04T10:14:41.2234176Z * [new branch] gh/ezyang/3196/head -> origin/gh/ezyang/3196/head 2025-12-04T10:14:41.2234251Z * [new branch] gh/ezyang/3196/orig -> origin/gh/ezyang/3196/orig 2025-12-04T10:14:41.2234319Z * [new branch] gh/ezyang/3197/base -> origin/gh/ezyang/3197/base 2025-12-04T10:14:41.2234387Z * [new branch] gh/ezyang/3197/head -> origin/gh/ezyang/3197/head 2025-12-04T10:14:41.2234461Z * [new branch] gh/ezyang/3197/orig -> origin/gh/ezyang/3197/orig 2025-12-04T10:14:41.2234528Z * [new branch] gh/ezyang/3198/base -> origin/gh/ezyang/3198/base 2025-12-04T10:14:41.2234597Z * [new branch] gh/ezyang/3198/head -> origin/gh/ezyang/3198/head 2025-12-04T10:14:41.2234729Z * [new branch] gh/ezyang/3198/orig -> origin/gh/ezyang/3198/orig 2025-12-04T10:14:41.2234797Z * [new branch] gh/ezyang/3199/base -> origin/gh/ezyang/3199/base 2025-12-04T10:14:41.2234867Z * [new branch] gh/ezyang/3199/head -> origin/gh/ezyang/3199/head 2025-12-04T10:14:41.2234937Z * [new branch] gh/ezyang/3199/orig -> origin/gh/ezyang/3199/orig 2025-12-04T10:14:41.2235007Z * [new branch] gh/ezyang/3200/base -> origin/gh/ezyang/3200/base 2025-12-04T10:14:41.2235077Z * [new branch] gh/ezyang/3200/head -> origin/gh/ezyang/3200/head 2025-12-04T10:14:41.2235144Z * [new branch] gh/ezyang/3200/orig -> origin/gh/ezyang/3200/orig 2025-12-04T10:14:41.2235212Z * [new branch] gh/ezyang/3201/base -> origin/gh/ezyang/3201/base 2025-12-04T10:14:41.2235284Z * [new branch] gh/ezyang/3201/head -> origin/gh/ezyang/3201/head 2025-12-04T10:14:41.2235351Z * [new branch] gh/ezyang/3201/orig -> origin/gh/ezyang/3201/orig 2025-12-04T10:14:41.2235419Z * [new branch] gh/ezyang/3202/base -> origin/gh/ezyang/3202/base 2025-12-04T10:14:41.2235490Z * [new branch] gh/ezyang/3202/head -> origin/gh/ezyang/3202/head 2025-12-04T10:14:41.2235596Z * [new branch] gh/ezyang/3202/orig -> origin/gh/ezyang/3202/orig 2025-12-04T10:14:41.2235664Z * [new branch] gh/ezyang/3203/base -> origin/gh/ezyang/3203/base 2025-12-04T10:14:41.2235734Z * [new branch] gh/ezyang/3203/head -> origin/gh/ezyang/3203/head 
2025-12-04T10:14:41.2235802Z * [new branch] gh/ezyang/3203/orig -> origin/gh/ezyang/3203/orig 2025-12-04T10:14:41.2235870Z * [new branch] gh/ezyang/3204/base -> origin/gh/ezyang/3204/base 2025-12-04T10:14:41.2235942Z * [new branch] gh/ezyang/3204/head -> origin/gh/ezyang/3204/head 2025-12-04T10:14:41.2236010Z * [new branch] gh/ezyang/3204/orig -> origin/gh/ezyang/3204/orig 2025-12-04T10:14:41.2236078Z * [new branch] gh/ezyang/3205/base -> origin/gh/ezyang/3205/base 2025-12-04T10:14:41.2236150Z * [new branch] gh/ezyang/3205/head -> origin/gh/ezyang/3205/head 2025-12-04T10:14:41.2236224Z * [new branch] gh/ezyang/3205/orig -> origin/gh/ezyang/3205/orig 2025-12-04T10:14:41.2236294Z * [new branch] gh/ezyang/3206/base -> origin/gh/ezyang/3206/base 2025-12-04T10:14:41.2236363Z * [new branch] gh/ezyang/3206/head -> origin/gh/ezyang/3206/head 2025-12-04T10:14:41.2236431Z * [new branch] gh/ezyang/3206/orig -> origin/gh/ezyang/3206/orig 2025-12-04T10:14:41.2236501Z * [new branch] gh/ezyang/3207/base -> origin/gh/ezyang/3207/base 2025-12-04T10:14:41.2236569Z * [new branch] gh/ezyang/3207/head -> origin/gh/ezyang/3207/head 2025-12-04T10:14:41.2236641Z * [new branch] gh/ezyang/3207/orig -> origin/gh/ezyang/3207/orig 2025-12-04T10:14:41.2236714Z * [new branch] gh/ezyang/3208/base -> origin/gh/ezyang/3208/base 2025-12-04T10:14:41.2236783Z * [new branch] gh/ezyang/3208/head -> origin/gh/ezyang/3208/head 2025-12-04T10:14:41.2236853Z * [new branch] gh/ezyang/3208/orig -> origin/gh/ezyang/3208/orig 2025-12-04T10:14:41.2236922Z * [new branch] gh/ezyang/3209/base -> origin/gh/ezyang/3209/base 2025-12-04T10:14:41.2236989Z * [new branch] gh/ezyang/3209/head -> origin/gh/ezyang/3209/head 2025-12-04T10:14:41.2237057Z * [new branch] gh/ezyang/3209/orig -> origin/gh/ezyang/3209/orig 2025-12-04T10:14:41.2237134Z * [new branch] gh/fadara01/3/base -> origin/gh/fadara01/3/base 2025-12-04T10:14:41.2237233Z * [new branch] gh/fadara01/3/head -> origin/gh/fadara01/3/head 2025-12-04T10:14:41.2237302Z * [new branch] gh/fadara01/3/orig -> origin/gh/fadara01/3/orig 2025-12-04T10:14:41.2237373Z * [new branch] gh/fadara01/5/base -> origin/gh/fadara01/5/base 2025-12-04T10:14:41.2237441Z * [new branch] gh/fadara01/5/head -> origin/gh/fadara01/5/head 2025-12-04T10:14:41.2237511Z * [new branch] gh/fadara01/5/orig -> origin/gh/fadara01/5/orig 2025-12-04T10:14:41.2237581Z * [new branch] gh/fadara01/6/base -> origin/gh/fadara01/6/base 2025-12-04T10:14:41.2237650Z * [new branch] gh/fadara01/6/head -> origin/gh/fadara01/6/head 2025-12-04T10:14:41.2237718Z * [new branch] gh/fadara01/6/orig -> origin/gh/fadara01/6/orig 2025-12-04T10:14:41.2237791Z * [new branch] gh/fadara01/7/base -> origin/gh/fadara01/7/base 2025-12-04T10:14:41.2237859Z * [new branch] gh/fadara01/7/head -> origin/gh/fadara01/7/head 2025-12-04T10:14:41.2237929Z * [new branch] gh/fadara01/7/orig -> origin/gh/fadara01/7/orig 2025-12-04T10:14:41.2238001Z * [new branch] gh/fadara01/8/base -> origin/gh/fadara01/8/base 2025-12-04T10:14:41.2238068Z * [new branch] gh/fadara01/8/head -> origin/gh/fadara01/8/head 2025-12-04T10:14:41.2238174Z * [new branch] gh/fadara01/8/orig -> origin/gh/fadara01/8/orig 2025-12-04T10:14:41.2238244Z * [new branch] gh/fadara01/9/base -> origin/gh/fadara01/9/base 2025-12-04T10:14:41.2238312Z * [new branch] gh/fadara01/9/head -> origin/gh/fadara01/9/head 2025-12-04T10:14:41.2238386Z * [new branch] gh/fadara01/9/orig -> origin/gh/fadara01/9/orig 2025-12-04T10:14:41.2238455Z * [new branch] gh/fduwjj/182/base -> origin/gh/fduwjj/182/base 
2025-12-04T10:14:41.2238524Z * [new branch] gh/fduwjj/182/head -> origin/gh/fduwjj/182/head 2025-12-04T10:14:41.2238597Z * [new branch] gh/fduwjj/182/orig -> origin/gh/fduwjj/182/orig 2025-12-04T10:14:41.2238665Z * [new branch] gh/fduwjj/211/base -> origin/gh/fduwjj/211/base 2025-12-04T10:14:41.2238735Z * [new branch] gh/fduwjj/211/head -> origin/gh/fduwjj/211/head 2025-12-04T10:14:41.2238812Z * [new branch] gh/fduwjj/211/orig -> origin/gh/fduwjj/211/orig 2025-12-04T10:14:41.2238881Z * [new branch] gh/fduwjj/212/base -> origin/gh/fduwjj/212/base 2025-12-04T10:14:41.2238949Z * [new branch] gh/fduwjj/212/head -> origin/gh/fduwjj/212/head 2025-12-04T10:14:41.2239019Z * [new branch] gh/fduwjj/212/orig -> origin/gh/fduwjj/212/orig 2025-12-04T10:14:41.2239088Z * [new branch] gh/fduwjj/213/base -> origin/gh/fduwjj/213/base 2025-12-04T10:14:41.2239156Z * [new branch] gh/fduwjj/213/head -> origin/gh/fduwjj/213/head 2025-12-04T10:14:41.2239233Z * [new branch] gh/fduwjj/213/orig -> origin/gh/fduwjj/213/orig 2025-12-04T10:14:41.2239303Z * [new branch] gh/fduwjj/226/base -> origin/gh/fduwjj/226/base 2025-12-04T10:14:41.2239372Z * [new branch] gh/fduwjj/226/head -> origin/gh/fduwjj/226/head 2025-12-04T10:14:41.2239444Z * [new branch] gh/fduwjj/226/orig -> origin/gh/fduwjj/226/orig 2025-12-04T10:14:41.2239513Z * [new branch] gh/fduwjj/229/base -> origin/gh/fduwjj/229/base 2025-12-04T10:14:41.2239582Z * [new branch] gh/fduwjj/229/head -> origin/gh/fduwjj/229/head 2025-12-04T10:14:41.2239653Z * [new branch] gh/fduwjj/229/orig -> origin/gh/fduwjj/229/orig 2025-12-04T10:14:41.2239720Z * [new branch] gh/fduwjj/233/base -> origin/gh/fduwjj/233/base 2025-12-04T10:14:41.2239789Z * [new branch] gh/fduwjj/233/head -> origin/gh/fduwjj/233/head 2025-12-04T10:14:41.2239886Z * [new branch] gh/fduwjj/233/orig -> origin/gh/fduwjj/233/orig 2025-12-04T10:14:41.2239953Z * [new branch] gh/fduwjj/234/base -> origin/gh/fduwjj/234/base 2025-12-04T10:14:41.2240021Z * [new branch] gh/fduwjj/234/head -> origin/gh/fduwjj/234/head 2025-12-04T10:14:41.2240090Z * [new branch] gh/fduwjj/234/orig -> origin/gh/fduwjj/234/orig 2025-12-04T10:14:41.2240158Z * [new branch] gh/fduwjj/235/base -> origin/gh/fduwjj/235/base 2025-12-04T10:14:41.2240229Z * [new branch] gh/fduwjj/235/head -> origin/gh/fduwjj/235/head 2025-12-04T10:14:41.2240296Z * [new branch] gh/fduwjj/235/orig -> origin/gh/fduwjj/235/orig 2025-12-04T10:14:41.2240363Z * [new branch] gh/fduwjj/236/base -> origin/gh/fduwjj/236/base 2025-12-04T10:14:41.2240432Z * [new branch] gh/fduwjj/236/head -> origin/gh/fduwjj/236/head 2025-12-04T10:14:41.2240501Z * [new branch] gh/fduwjj/236/orig -> origin/gh/fduwjj/236/orig 2025-12-04T10:14:41.2240568Z * [new branch] gh/fduwjj/237/base -> origin/gh/fduwjj/237/base 2025-12-04T10:14:41.2240673Z * [new branch] gh/fduwjj/237/head -> origin/gh/fduwjj/237/head 2025-12-04T10:14:41.2240792Z * [new branch] gh/fduwjj/237/orig -> origin/gh/fduwjj/237/orig 2025-12-04T10:14:41.2240861Z * [new branch] gh/fduwjj/238/base -> origin/gh/fduwjj/238/base 2025-12-04T10:14:41.2240931Z * [new branch] gh/fduwjj/238/head -> origin/gh/fduwjj/238/head 2025-12-04T10:14:41.2240998Z * [new branch] gh/fduwjj/238/orig -> origin/gh/fduwjj/238/orig 2025-12-04T10:14:41.2241069Z * [new branch] gh/fduwjj/239/base -> origin/gh/fduwjj/239/base 2025-12-04T10:14:41.2241139Z * [new branch] gh/fduwjj/239/head -> origin/gh/fduwjj/239/head 2025-12-04T10:14:41.2241208Z * [new branch] gh/fduwjj/239/orig -> origin/gh/fduwjj/239/orig 2025-12-04T10:14:41.2241280Z * [new branch] 
gh/fegin/332/base -> origin/gh/fegin/332/base 2025-12-04T10:14:41.2241348Z * [new branch] gh/fegin/332/head -> origin/gh/fegin/332/head 2025-12-04T10:14:41.2241418Z * [new branch] gh/fegin/332/orig -> origin/gh/fegin/332/orig 2025-12-04T10:14:41.2241487Z * [new branch] gh/fegin/333/base -> origin/gh/fegin/333/base 2025-12-04T10:14:41.2241553Z * [new branch] gh/fegin/333/head -> origin/gh/fegin/333/head 2025-12-04T10:14:41.2241619Z * [new branch] gh/fegin/333/orig -> origin/gh/fegin/333/orig 2025-12-04T10:14:41.2241685Z * [new branch] gh/fegin/334/base -> origin/gh/fegin/334/base 2025-12-04T10:14:41.2241750Z * [new branch] gh/fegin/334/head -> origin/gh/fegin/334/head 2025-12-04T10:14:41.2241815Z * [new branch] gh/fegin/334/orig -> origin/gh/fegin/334/orig 2025-12-04T10:14:41.2241883Z * [new branch] gh/fegin/335/base -> origin/gh/fegin/335/base 2025-12-04T10:14:41.2241949Z * [new branch] gh/fegin/335/head -> origin/gh/fegin/335/head 2025-12-04T10:14:41.2242016Z * [new branch] gh/fegin/335/orig -> origin/gh/fegin/335/orig 2025-12-04T10:14:41.2242086Z * [new branch] gh/fffrog/160/base -> origin/gh/fffrog/160/base 2025-12-04T10:14:41.2242154Z * [new branch] gh/fffrog/160/head -> origin/gh/fffrog/160/head 2025-12-04T10:14:41.2242220Z * [new branch] gh/fffrog/177/base -> origin/gh/fffrog/177/base 2025-12-04T10:14:41.2242289Z * [new branch] gh/fffrog/177/head -> origin/gh/fffrog/177/head 2025-12-04T10:14:41.2242355Z * [new branch] gh/fffrog/177/orig -> origin/gh/fffrog/177/orig 2025-12-04T10:14:41.2242460Z * [new branch] gh/fffrog/178/base -> origin/gh/fffrog/178/base 2025-12-04T10:14:41.2242528Z * [new branch] gh/fffrog/178/head -> origin/gh/fffrog/178/head 2025-12-04T10:14:41.2242596Z * [new branch] gh/fffrog/178/orig -> origin/gh/fffrog/178/orig 2025-12-04T10:14:41.2242663Z * [new branch] gh/fffrog/181/base -> origin/gh/fffrog/181/base 2025-12-04T10:14:41.2242731Z * [new branch] gh/fffrog/181/head -> origin/gh/fffrog/181/head 2025-12-04T10:14:41.2242799Z * [new branch] gh/fffrog/181/orig -> origin/gh/fffrog/181/orig 2025-12-04T10:14:41.2242868Z * [new branch] gh/fffrog/183/base -> origin/gh/fffrog/183/base 2025-12-04T10:14:41.2242936Z * [new branch] gh/fffrog/183/head -> origin/gh/fffrog/183/head 2025-12-04T10:14:41.2243002Z * [new branch] gh/fffrog/183/orig -> origin/gh/fffrog/183/orig 2025-12-04T10:14:41.2243073Z * [new branch] gh/fxdawnn/10/base -> origin/gh/fxdawnn/10/base 2025-12-04T10:14:41.2243140Z * [new branch] gh/fxdawnn/10/head -> origin/gh/fxdawnn/10/head 2025-12-04T10:14:41.2243207Z * [new branch] gh/fxdawnn/10/orig -> origin/gh/fxdawnn/10/orig 2025-12-04T10:14:41.2243276Z * [new branch] gh/fxdawnn/11/base -> origin/gh/fxdawnn/11/base 2025-12-04T10:14:41.2243378Z * [new branch] gh/fxdawnn/11/head -> origin/gh/fxdawnn/11/head 2025-12-04T10:14:41.2243446Z * [new branch] gh/fxdawnn/11/orig -> origin/gh/fxdawnn/11/orig 2025-12-04T10:14:41.2243514Z * [new branch] gh/fxdawnn/12/base -> origin/gh/fxdawnn/12/base 2025-12-04T10:14:41.2243581Z * [new branch] gh/fxdawnn/12/head -> origin/gh/fxdawnn/12/head 2025-12-04T10:14:41.2243649Z * [new branch] gh/fxdawnn/12/orig -> origin/gh/fxdawnn/12/orig 2025-12-04T10:14:41.2243725Z * [new branch] gh/fxdawnn/13/base -> origin/gh/fxdawnn/13/base 2025-12-04T10:14:41.2243794Z * [new branch] gh/fxdawnn/13/head -> origin/gh/fxdawnn/13/head 2025-12-04T10:14:41.2243864Z * [new branch] gh/fxdawnn/13/orig -> origin/gh/fxdawnn/13/orig 2025-12-04T10:14:41.2243938Z * [new branch] gh/fxdawnn/14/base -> origin/gh/fxdawnn/14/base 2025-12-04T10:14:41.2244006Z * 
[new branch] gh/fxdawnn/14/head -> origin/gh/fxdawnn/14/head 2025-12-04T10:14:41.2244075Z * [new branch] gh/fxdawnn/14/orig -> origin/gh/fxdawnn/14/orig 2025-12-04T10:14:41.2244143Z * [new branch] gh/fxdawnn/15/base -> origin/gh/fxdawnn/15/base 2025-12-04T10:14:41.2244210Z * [new branch] gh/fxdawnn/15/head -> origin/gh/fxdawnn/15/head 2025-12-04T10:14:41.2244278Z * [new branch] gh/fxdawnn/15/orig -> origin/gh/fxdawnn/15/orig 2025-12-04T10:14:41.2244349Z * [new branch] gh/fxdawnn/6/base -> origin/gh/fxdawnn/6/base 2025-12-04T10:14:41.2244417Z * [new branch] gh/fxdawnn/6/head -> origin/gh/fxdawnn/6/head 2025-12-04T10:14:41.2244486Z * [new branch] gh/fxdawnn/6/orig -> origin/gh/fxdawnn/6/orig 2025-12-04T10:14:41.2244553Z * [new branch] gh/fxdawnn/7/base -> origin/gh/fxdawnn/7/base 2025-12-04T10:14:41.2244620Z * [new branch] gh/fxdawnn/7/head -> origin/gh/fxdawnn/7/head 2025-12-04T10:14:41.2244689Z * [new branch] gh/fxdawnn/7/orig -> origin/gh/fxdawnn/7/orig 2025-12-04T10:14:41.2244755Z * [new branch] gh/fxdawnn/9/base -> origin/gh/fxdawnn/9/base 2025-12-04T10:14:41.2244821Z * [new branch] gh/fxdawnn/9/head -> origin/gh/fxdawnn/9/head 2025-12-04T10:14:41.2244889Z * [new branch] gh/fxdawnn/9/orig -> origin/gh/fxdawnn/9/orig 2025-12-04T10:14:41.2244987Z * [new branch] gh/galv/1/base -> origin/gh/galv/1/base 2025-12-04T10:14:41.2245053Z * [new branch] gh/galv/1/head -> origin/gh/galv/1/head 2025-12-04T10:14:41.2245118Z * [new branch] gh/galv/1/orig -> origin/gh/galv/1/orig 2025-12-04T10:14:41.2245181Z * [new branch] gh/galv/2/base -> origin/gh/galv/2/base 2025-12-04T10:14:41.2245245Z * [new branch] gh/galv/2/head -> origin/gh/galv/2/head 2025-12-04T10:14:41.2245309Z * [new branch] gh/galv/2/orig -> origin/gh/galv/2/orig 2025-12-04T10:14:41.2245372Z * [new branch] gh/galv/3/base -> origin/gh/galv/3/base 2025-12-04T10:14:41.2245436Z * [new branch] gh/galv/3/head -> origin/gh/galv/3/head 2025-12-04T10:14:41.2245500Z * [new branch] gh/galv/3/orig -> origin/gh/galv/3/orig 2025-12-04T10:14:41.2245577Z * [new branch] gh/guangyey/134/base -> origin/gh/guangyey/134/base 2025-12-04T10:14:41.2245651Z * [new branch] gh/guangyey/134/head -> origin/gh/guangyey/134/head 2025-12-04T10:14:41.2245725Z * [new branch] gh/guangyey/134/orig -> origin/gh/guangyey/134/orig 2025-12-04T10:14:41.2245797Z * [new branch] gh/guangyey/163/base -> origin/gh/guangyey/163/base 2025-12-04T10:14:41.2245892Z * [new branch] gh/guangyey/163/head -> origin/gh/guangyey/163/head 2025-12-04T10:14:41.2245965Z * [new branch] gh/guangyey/163/orig -> origin/gh/guangyey/163/orig 2025-12-04T10:14:41.2246035Z * [new branch] gh/guangyey/168/base -> origin/gh/guangyey/168/base 2025-12-04T10:14:41.2246107Z * [new branch] gh/guangyey/168/head -> origin/gh/guangyey/168/head 2025-12-04T10:14:41.2246176Z * [new branch] gh/guangyey/168/orig -> origin/gh/guangyey/168/orig 2025-12-04T10:14:41.2246247Z * [new branch] gh/guangyey/169/base -> origin/gh/guangyey/169/base 2025-12-04T10:14:41.2246318Z * [new branch] gh/guangyey/169/head -> origin/gh/guangyey/169/head 2025-12-04T10:14:41.2246389Z * [new branch] gh/guangyey/169/orig -> origin/gh/guangyey/169/orig 2025-12-04T10:14:41.2246458Z * [new branch] gh/guangyey/170/base -> origin/gh/guangyey/170/base 2025-12-04T10:14:41.2246533Z * [new branch] gh/guangyey/170/head -> origin/gh/guangyey/170/head 2025-12-04T10:14:41.2246604Z * [new branch] gh/guangyey/170/orig -> origin/gh/guangyey/170/orig 2025-12-04T10:14:41.2246674Z * [new branch] gh/guangyey/171/base -> origin/gh/guangyey/171/base 
2025-12-04T10:14:41.2246744Z  * [new branch]          gh/guangyey/171/head -> origin/gh/guangyey/171/head
[... several hundred similar "* [new branch]" remote-tracking refs elided: ghstack-style branches, mostly of the form gh/<user>/<n>/{base,head,orig}, for guangyey, guilhermeleobas, hameerabbasi, huydhn, int3, isuruf, jamesjwu, janeyx99, jansel, jbschlosser, jerryzh168, jiayisunx, jjwu@meta.com, jturney, karthickai, krocki, kurtamohler, kwen2501, and laithsakka ...]
2025-12-04T10:14:41.2301164Z  * [new branch]          gh/laithsakka/326/base -> origin/gh/laithsakka/326/base
2025-12-04T10:14:41.2301238Z * [new branch] gh/laithsakka/326/head -> origin/gh/laithsakka/326/head 2025-12-04T10:14:41.2301350Z * [new branch] gh/laithsakka/326/orig -> origin/gh/laithsakka/326/orig 2025-12-04T10:14:41.2301423Z * [new branch] gh/laithsakka/327/base -> origin/gh/laithsakka/327/base 2025-12-04T10:14:41.2301498Z * [new branch] gh/laithsakka/327/head -> origin/gh/laithsakka/327/head 2025-12-04T10:14:41.2301571Z * [new branch] gh/laithsakka/327/orig -> origin/gh/laithsakka/327/orig 2025-12-04T10:14:41.2301643Z * [new branch] gh/laithsakka/328/base -> origin/gh/laithsakka/328/base 2025-12-04T10:14:41.2301719Z * [new branch] gh/laithsakka/328/head -> origin/gh/laithsakka/328/head 2025-12-04T10:14:41.2301791Z * [new branch] gh/laithsakka/328/orig -> origin/gh/laithsakka/328/orig 2025-12-04T10:14:41.2301860Z * [new branch] gh/liangel/4/base -> origin/gh/liangel/4/base 2025-12-04T10:14:41.2301930Z * [new branch] gh/liangel/4/head -> origin/gh/liangel/4/head 2025-12-04T10:14:41.2302001Z * [new branch] gh/liangel/4/orig -> origin/gh/liangel/4/orig 2025-12-04T10:14:41.2302076Z * [new branch] gh/lucaskabela/1/base -> origin/gh/lucaskabela/1/base 2025-12-04T10:14:41.2302150Z * [new branch] gh/lucaskabela/1/head -> origin/gh/lucaskabela/1/head 2025-12-04T10:14:41.2302215Z * [new branch] gh/lw/4/base -> origin/gh/lw/4/base 2025-12-04T10:14:41.2302277Z * [new branch] gh/lw/4/head -> origin/gh/lw/4/head 2025-12-04T10:14:41.2302340Z * [new branch] gh/lw/4/orig -> origin/gh/lw/4/orig 2025-12-04T10:14:41.2302402Z * [new branch] gh/lw/5/base -> origin/gh/lw/5/base 2025-12-04T10:14:41.2302463Z * [new branch] gh/lw/5/head -> origin/gh/lw/5/head 2025-12-04T10:14:41.2302525Z * [new branch] gh/lw/5/orig -> origin/gh/lw/5/orig 2025-12-04T10:14:41.2302587Z * [new branch] gh/lw/6/base -> origin/gh/lw/6/base 2025-12-04T10:14:41.2302647Z * [new branch] gh/lw/6/head -> origin/gh/lw/6/head 2025-12-04T10:14:41.2302709Z * [new branch] gh/lw/6/orig -> origin/gh/lw/6/orig 2025-12-04T10:14:41.2302777Z * [new branch] gh/malfet/14/base -> origin/gh/malfet/14/base 2025-12-04T10:14:41.2302847Z * [new branch] gh/malfet/417/base -> origin/gh/malfet/417/base 2025-12-04T10:14:41.2302916Z * [new branch] gh/malfet/417/head -> origin/gh/malfet/417/head 2025-12-04T10:14:41.2303028Z * [new branch] gh/malfet/417/orig -> origin/gh/malfet/417/orig 2025-12-04T10:14:41.2303097Z * [new branch] gh/malfet/506/base -> origin/gh/malfet/506/base 2025-12-04T10:14:41.2303163Z * [new branch] gh/malfet/506/head -> origin/gh/malfet/506/head 2025-12-04T10:14:41.2303231Z * [new branch] gh/malfet/506/orig -> origin/gh/malfet/506/orig 2025-12-04T10:14:41.2303301Z * [new branch] gh/malfet/517/base -> origin/gh/malfet/517/base 2025-12-04T10:14:41.2303367Z * [new branch] gh/malfet/517/head -> origin/gh/malfet/517/head 2025-12-04T10:14:41.2303433Z * [new branch] gh/malfet/528/base -> origin/gh/malfet/528/base 2025-12-04T10:14:41.2303501Z * [new branch] gh/malfet/528/head -> origin/gh/malfet/528/head 2025-12-04T10:14:41.2303568Z * [new branch] gh/malfet/528/orig -> origin/gh/malfet/528/orig 2025-12-04T10:14:41.2303636Z * [new branch] gh/malfet/537/base -> origin/gh/malfet/537/base 2025-12-04T10:14:41.2303704Z * [new branch] gh/malfet/537/head -> origin/gh/malfet/537/head 2025-12-04T10:14:41.2303770Z * [new branch] gh/malfet/537/orig -> origin/gh/malfet/537/orig 2025-12-04T10:14:41.2303867Z * [new branch] gh/malfet/546/base -> origin/gh/malfet/546/base 2025-12-04T10:14:41.2303936Z * [new branch] gh/malfet/546/head -> origin/gh/malfet/546/head 
2025-12-04T10:14:41.2304003Z * [new branch] gh/malfet/546/orig -> origin/gh/malfet/546/orig 2025-12-04T10:14:41.2304070Z * [new branch] gh/malfet/565/base -> origin/gh/malfet/565/base 2025-12-04T10:14:41.2304138Z * [new branch] gh/malfet/565/head -> origin/gh/malfet/565/head 2025-12-04T10:14:41.2304206Z * [new branch] gh/malfet/565/orig -> origin/gh/malfet/565/orig 2025-12-04T10:14:41.2304274Z * [new branch] gh/malfet/575/base -> origin/gh/malfet/575/base 2025-12-04T10:14:41.2304342Z * [new branch] gh/malfet/575/head -> origin/gh/malfet/575/head 2025-12-04T10:14:41.2304409Z * [new branch] gh/malfet/575/orig -> origin/gh/malfet/575/orig 2025-12-04T10:14:41.2304478Z * [new branch] gh/malfet/580/base -> origin/gh/malfet/580/base 2025-12-04T10:14:41.2304545Z * [new branch] gh/malfet/580/head -> origin/gh/malfet/580/head 2025-12-04T10:14:41.2304611Z * [new branch] gh/malfet/580/orig -> origin/gh/malfet/580/orig 2025-12-04T10:14:41.2304678Z * [new branch] gh/malfet/581/base -> origin/gh/malfet/581/base 2025-12-04T10:14:41.2304745Z * [new branch] gh/malfet/581/head -> origin/gh/malfet/581/head 2025-12-04T10:14:41.2304811Z * [new branch] gh/malfet/581/orig -> origin/gh/malfet/581/orig 2025-12-04T10:14:41.2304881Z * [new branch] gh/malfet/583/base -> origin/gh/malfet/583/base 2025-12-04T10:14:41.2304947Z * [new branch] gh/malfet/583/head -> origin/gh/malfet/583/head 2025-12-04T10:14:41.2305014Z * [new branch] gh/malfet/583/orig -> origin/gh/malfet/583/orig 2025-12-04T10:14:41.2305082Z * [new branch] gh/malfet/586/base -> origin/gh/malfet/586/base 2025-12-04T10:14:41.2305149Z * [new branch] gh/malfet/586/head -> origin/gh/malfet/586/head 2025-12-04T10:14:41.2305215Z * [new branch] gh/malfet/586/orig -> origin/gh/malfet/586/orig 2025-12-04T10:14:41.2305283Z * [new branch] gh/malfet/587/base -> origin/gh/malfet/587/base 2025-12-04T10:14:41.2305349Z * [new branch] gh/malfet/587/head -> origin/gh/malfet/587/head 2025-12-04T10:14:41.2305415Z * [new branch] gh/malfet/587/orig -> origin/gh/malfet/587/orig 2025-12-04T10:14:41.2305510Z * [new branch] gh/malfet/588/base -> origin/gh/malfet/588/base 2025-12-04T10:14:41.2305577Z * [new branch] gh/malfet/588/head -> origin/gh/malfet/588/head 2025-12-04T10:14:41.2305645Z * [new branch] gh/malfet/588/orig -> origin/gh/malfet/588/orig 2025-12-04T10:14:41.2305712Z * [new branch] gh/malfet/589/base -> origin/gh/malfet/589/base 2025-12-04T10:14:41.2305780Z * [new branch] gh/malfet/589/head -> origin/gh/malfet/589/head 2025-12-04T10:14:41.2305846Z * [new branch] gh/malfet/589/orig -> origin/gh/malfet/589/orig 2025-12-04T10:14:41.2305914Z * [new branch] gh/malfet/590/base -> origin/gh/malfet/590/base 2025-12-04T10:14:41.2305981Z * [new branch] gh/malfet/590/head -> origin/gh/malfet/590/head 2025-12-04T10:14:41.2306047Z * [new branch] gh/malfet/590/orig -> origin/gh/malfet/590/orig 2025-12-04T10:14:41.2306115Z * [new branch] gh/malfet/591/base -> origin/gh/malfet/591/base 2025-12-04T10:14:41.2306182Z * [new branch] gh/malfet/591/head -> origin/gh/malfet/591/head 2025-12-04T10:14:41.2306249Z * [new branch] gh/malfet/591/orig -> origin/gh/malfet/591/orig 2025-12-04T10:14:41.2306315Z * [new branch] gh/malfet/592/base -> origin/gh/malfet/592/base 2025-12-04T10:14:41.2306407Z * [new branch] gh/malfet/592/head -> origin/gh/malfet/592/head 2025-12-04T10:14:41.2306476Z * [new branch] gh/malfet/592/orig -> origin/gh/malfet/592/orig 2025-12-04T10:14:41.2306543Z * [new branch] gh/malfet/593/base -> origin/gh/malfet/593/base 2025-12-04T10:14:41.2306609Z * [new branch] 
gh/malfet/593/head -> origin/gh/malfet/593/head 2025-12-04T10:14:41.2306677Z * [new branch] gh/malfet/593/orig -> origin/gh/malfet/593/orig 2025-12-04T10:14:41.2306745Z * [new branch] gh/malfet/594/base -> origin/gh/malfet/594/base 2025-12-04T10:14:41.2306811Z * [new branch] gh/malfet/594/head -> origin/gh/malfet/594/head 2025-12-04T10:14:41.2306879Z * [new branch] gh/malfet/594/orig -> origin/gh/malfet/594/orig 2025-12-04T10:14:41.2306945Z * [new branch] gh/malfet/595/base -> origin/gh/malfet/595/base 2025-12-04T10:14:41.2307014Z * [new branch] gh/malfet/595/head -> origin/gh/malfet/595/head 2025-12-04T10:14:41.2307082Z * [new branch] gh/malfet/595/orig -> origin/gh/malfet/595/orig 2025-12-04T10:14:41.2307148Z * [new branch] gh/malfet/596/base -> origin/gh/malfet/596/base 2025-12-04T10:14:41.2307215Z * [new branch] gh/malfet/596/head -> origin/gh/malfet/596/head 2025-12-04T10:14:41.2307282Z * [new branch] gh/malfet/596/orig -> origin/gh/malfet/596/orig 2025-12-04T10:14:41.2307355Z * [new branch] gh/malfet/597/base -> origin/gh/malfet/597/base 2025-12-04T10:14:41.2307426Z * [new branch] gh/malfet/597/head -> origin/gh/malfet/597/head 2025-12-04T10:14:41.2307498Z * [new branch] gh/malfet/597/orig -> origin/gh/malfet/597/orig 2025-12-04T10:14:41.2307564Z * [new branch] gh/malfet/598/base -> origin/gh/malfet/598/base 2025-12-04T10:14:41.2307633Z * [new branch] gh/malfet/598/head -> origin/gh/malfet/598/head 2025-12-04T10:14:41.2307701Z * [new branch] gh/malfet/598/orig -> origin/gh/malfet/598/orig 2025-12-04T10:14:41.2307767Z * [new branch] gh/malfet/599/base -> origin/gh/malfet/599/base 2025-12-04T10:14:41.2307835Z * [new branch] gh/malfet/599/head -> origin/gh/malfet/599/head 2025-12-04T10:14:41.2307902Z * [new branch] gh/malfet/599/orig -> origin/gh/malfet/599/orig 2025-12-04T10:14:41.2307995Z * [new branch] gh/malfet/600/base -> origin/gh/malfet/600/base 2025-12-04T10:14:41.2308063Z * [new branch] gh/malfet/600/head -> origin/gh/malfet/600/head 2025-12-04T10:14:41.2308130Z * [new branch] gh/malfet/600/orig -> origin/gh/malfet/600/orig 2025-12-04T10:14:41.2308197Z * [new branch] gh/malfet/601/base -> origin/gh/malfet/601/base 2025-12-04T10:14:41.2308265Z * [new branch] gh/malfet/601/head -> origin/gh/malfet/601/head 2025-12-04T10:14:41.2308332Z * [new branch] gh/malfet/601/orig -> origin/gh/malfet/601/orig 2025-12-04T10:14:41.2308399Z * [new branch] gh/malfet/602/base -> origin/gh/malfet/602/base 2025-12-04T10:14:41.2308468Z * [new branch] gh/malfet/602/head -> origin/gh/malfet/602/head 2025-12-04T10:14:41.2308534Z * [new branch] gh/malfet/602/orig -> origin/gh/malfet/602/orig 2025-12-04T10:14:41.2308602Z * [new branch] gh/malfet/603/base -> origin/gh/malfet/603/base 2025-12-04T10:14:41.2308669Z * [new branch] gh/malfet/603/head -> origin/gh/malfet/603/head 2025-12-04T10:14:41.2308737Z * [new branch] gh/malfet/603/orig -> origin/gh/malfet/603/orig 2025-12-04T10:14:41.2308804Z * [new branch] gh/malfet/604/base -> origin/gh/malfet/604/base 2025-12-04T10:14:41.2308898Z * [new branch] gh/malfet/604/head -> origin/gh/malfet/604/head 2025-12-04T10:14:41.2308965Z * [new branch] gh/malfet/604/orig -> origin/gh/malfet/604/orig 2025-12-04T10:14:41.2309033Z * [new branch] gh/malfet/605/base -> origin/gh/malfet/605/base 2025-12-04T10:14:41.2309101Z * [new branch] gh/malfet/605/head -> origin/gh/malfet/605/head 2025-12-04T10:14:41.2309168Z * [new branch] gh/malfet/605/orig -> origin/gh/malfet/605/orig 2025-12-04T10:14:41.2309237Z * [new branch] gh/malfet/606/base -> origin/gh/malfet/606/base 
2025-12-04T10:14:41.2309305Z * [new branch] gh/malfet/606/head -> origin/gh/malfet/606/head 2025-12-04T10:14:41.2309372Z * [new branch] gh/malfet/606/orig -> origin/gh/malfet/606/orig 2025-12-04T10:14:41.2309441Z * [new branch] gh/malfet/607/base -> origin/gh/malfet/607/base 2025-12-04T10:14:41.2309510Z * [new branch] gh/malfet/607/head -> origin/gh/malfet/607/head 2025-12-04T10:14:41.2309578Z * [new branch] gh/malfet/607/orig -> origin/gh/malfet/607/orig 2025-12-04T10:14:41.2309646Z * [new branch] gh/malfet/608/base -> origin/gh/malfet/608/base 2025-12-04T10:14:41.2309713Z * [new branch] gh/malfet/608/head -> origin/gh/malfet/608/head 2025-12-04T10:14:41.2309779Z * [new branch] gh/malfet/608/orig -> origin/gh/malfet/608/orig 2025-12-04T10:14:41.2309847Z * [new branch] gh/malfet/609/base -> origin/gh/malfet/609/base 2025-12-04T10:14:41.2309916Z * [new branch] gh/malfet/609/head -> origin/gh/malfet/609/head 2025-12-04T10:14:41.2309982Z * [new branch] gh/malfet/609/orig -> origin/gh/malfet/609/orig 2025-12-04T10:14:41.2310050Z * [new branch] gh/malfet/610/base -> origin/gh/malfet/610/base 2025-12-04T10:14:41.2310117Z * [new branch] gh/malfet/610/head -> origin/gh/malfet/610/head 2025-12-04T10:14:41.2310184Z * [new branch] gh/malfet/610/orig -> origin/gh/malfet/610/orig 2025-12-04T10:14:41.2310252Z * [new branch] gh/malfet/611/base -> origin/gh/malfet/611/base 2025-12-04T10:14:41.2310319Z * [new branch] gh/malfet/611/head -> origin/gh/malfet/611/head 2025-12-04T10:14:41.2310385Z * [new branch] gh/malfet/611/orig -> origin/gh/malfet/611/orig 2025-12-04T10:14:41.2310453Z * [new branch] gh/malfet/612/base -> origin/gh/malfet/612/base 2025-12-04T10:14:41.2310546Z * [new branch] gh/malfet/612/head -> origin/gh/malfet/612/head 2025-12-04T10:14:41.2310650Z * [new branch] gh/malfet/612/orig -> origin/gh/malfet/612/orig 2025-12-04T10:14:41.2310721Z * [new branch] gh/malfet/64/base -> origin/gh/malfet/64/base 2025-12-04T10:14:41.2310790Z * [new branch] gh/malfet/64/head -> origin/gh/malfet/64/head 2025-12-04T10:14:41.2310879Z * [new branch] gh/manuelcandales/11/base -> origin/gh/manuelcandales/11/base 2025-12-04T10:14:41.2310964Z * [new branch] gh/manuelcandales/11/head -> origin/gh/manuelcandales/11/head 2025-12-04T10:14:41.2311047Z * [new branch] gh/manuelcandales/11/orig -> origin/gh/manuelcandales/11/orig 2025-12-04T10:14:41.2311116Z * [new branch] gh/markkm/1/base -> origin/gh/markkm/1/base 2025-12-04T10:14:41.2311192Z * [new branch] gh/masnesral/1/base -> origin/gh/masnesral/1/base 2025-12-04T10:14:41.2311264Z * [new branch] gh/masnesral/1/head -> origin/gh/masnesral/1/head 2025-12-04T10:14:41.2311336Z * [new branch] gh/masnesral/1/orig -> origin/gh/masnesral/1/orig 2025-12-04T10:14:41.2311405Z * [new branch] gh/mhorowitz/0/base -> origin/gh/mhorowitz/0/base 2025-12-04T10:14:41.2311530Z * [new branch] gh/mhorowitz/0/head -> origin/gh/mhorowitz/0/head 2025-12-04T10:14:41.2311600Z * [new branch] gh/mhorowitz/1/base -> origin/gh/mhorowitz/1/base 2025-12-04T10:14:41.2311668Z * [new branch] gh/mhorowitz/1/head -> origin/gh/mhorowitz/1/head 2025-12-04T10:14:41.2311736Z * [new branch] gh/mhorowitz/2/base -> origin/gh/mhorowitz/2/base 2025-12-04T10:14:41.2311806Z * [new branch] gh/mhorowitz/2/head -> origin/gh/mhorowitz/2/head 2025-12-04T10:14:41.2311876Z * [new branch] gh/mhorowitz/3/base -> origin/gh/mhorowitz/3/base 2025-12-04T10:14:41.2311946Z * [new branch] gh/mhorowitz/3/head -> origin/gh/mhorowitz/3/head 2025-12-04T10:14:41.2312016Z * [new branch] gh/mhorowitz/4/base -> origin/gh/mhorowitz/4/base 
2025-12-04T10:14:41.2312085Z * [new branch] gh/mhorowitz/4/head -> origin/gh/mhorowitz/4/head 2025-12-04T10:14:41.2312155Z * [new branch] gh/mhorowitz/5/base -> origin/gh/mhorowitz/5/base 2025-12-04T10:14:41.2312226Z * [new branch] gh/mhorowitz/5/head -> origin/gh/mhorowitz/5/head 2025-12-04T10:14:41.2312295Z * [new branch] gh/mhorowitz/6/base -> origin/gh/mhorowitz/6/base 2025-12-04T10:14:41.2312364Z * [new branch] gh/mhorowitz/6/head -> origin/gh/mhorowitz/6/head 2025-12-04T10:14:41.2312465Z * [new branch] gh/mikaylagawarecki/234/base -> origin/gh/mikaylagawarecki/234/base 2025-12-04T10:14:41.2312562Z * [new branch] gh/mikaylagawarecki/234/head -> origin/gh/mikaylagawarecki/234/head 2025-12-04T10:14:41.2312656Z * [new branch] gh/mikaylagawarecki/235/base -> origin/gh/mikaylagawarecki/235/base 2025-12-04T10:14:41.2312751Z * [new branch] gh/mikaylagawarecki/235/head -> origin/gh/mikaylagawarecki/235/head 2025-12-04T10:14:41.2312845Z * [new branch] gh/mikaylagawarecki/236/base -> origin/gh/mikaylagawarecki/236/base 2025-12-04T10:14:41.2312936Z * [new branch] gh/mikaylagawarecki/236/head -> origin/gh/mikaylagawarecki/236/head 2025-12-04T10:14:41.2313026Z * [new branch] gh/mikaylagawarecki/237/base -> origin/gh/mikaylagawarecki/237/base 2025-12-04T10:14:41.2313115Z * [new branch] gh/mikaylagawarecki/237/head -> origin/gh/mikaylagawarecki/237/head 2025-12-04T10:14:41.2313207Z * [new branch] gh/mikaylagawarecki/238/base -> origin/gh/mikaylagawarecki/238/base 2025-12-04T10:14:41.2313336Z * [new branch] gh/mikaylagawarecki/238/head -> origin/gh/mikaylagawarecki/238/head 2025-12-04T10:14:41.2313426Z * [new branch] gh/mikaylagawarecki/336/base -> origin/gh/mikaylagawarecki/336/base 2025-12-04T10:14:41.2313516Z * [new branch] gh/mikaylagawarecki/336/head -> origin/gh/mikaylagawarecki/336/head 2025-12-04T10:14:41.2313608Z * [new branch] gh/mikaylagawarecki/336/orig -> origin/gh/mikaylagawarecki/336/orig 2025-12-04T10:14:41.2313698Z * [new branch] gh/mikaylagawarecki/341/base -> origin/gh/mikaylagawarecki/341/base 2025-12-04T10:14:41.2313789Z * [new branch] gh/mikaylagawarecki/341/head -> origin/gh/mikaylagawarecki/341/head 2025-12-04T10:14:41.2313879Z * [new branch] gh/mikaylagawarecki/341/orig -> origin/gh/mikaylagawarecki/341/orig 2025-12-04T10:14:41.2313969Z * [new branch] gh/mikaylagawarecki/342/base -> origin/gh/mikaylagawarecki/342/base 2025-12-04T10:14:41.2314061Z * [new branch] gh/mikaylagawarecki/342/head -> origin/gh/mikaylagawarecki/342/head 2025-12-04T10:14:41.2314150Z * [new branch] gh/mikaylagawarecki/342/orig -> origin/gh/mikaylagawarecki/342/orig 2025-12-04T10:14:41.2314242Z * [new branch] gh/mikaylagawarecki/345/base -> origin/gh/mikaylagawarecki/345/base 2025-12-04T10:14:41.2314360Z * [new branch] gh/mikaylagawarecki/345/head -> origin/gh/mikaylagawarecki/345/head 2025-12-04T10:14:41.2314451Z * [new branch] gh/mikaylagawarecki/345/orig -> origin/gh/mikaylagawarecki/345/orig 2025-12-04T10:14:41.2314542Z * [new branch] gh/mikaylagawarecki/346/base -> origin/gh/mikaylagawarecki/346/base 2025-12-04T10:14:41.2314632Z * [new branch] gh/mikaylagawarecki/346/head -> origin/gh/mikaylagawarecki/346/head 2025-12-04T10:14:41.2314723Z * [new branch] gh/mikaylagawarecki/346/orig -> origin/gh/mikaylagawarecki/346/orig 2025-12-04T10:14:41.2314815Z * [new branch] gh/mikaylagawarecki/347/base -> origin/gh/mikaylagawarecki/347/base 2025-12-04T10:14:41.2314907Z * [new branch] gh/mikaylagawarecki/347/head -> origin/gh/mikaylagawarecki/347/head 2025-12-04T10:14:41.2314997Z * [new branch] 
gh/mikaylagawarecki/347/orig -> origin/gh/mikaylagawarecki/347/orig 2025-12-04T10:14:41.2315091Z * [new branch] gh/mikaylagawarecki/350/base -> origin/gh/mikaylagawarecki/350/base 2025-12-04T10:14:41.2315181Z * [new branch] gh/mikaylagawarecki/350/head -> origin/gh/mikaylagawarecki/350/head 2025-12-04T10:14:41.2315271Z * [new branch] gh/mikaylagawarecki/350/orig -> origin/gh/mikaylagawarecki/350/orig 2025-12-04T10:14:41.2315362Z * [new branch] gh/mikaylagawarecki/351/base -> origin/gh/mikaylagawarecki/351/base 2025-12-04T10:14:41.2315452Z * [new branch] gh/mikaylagawarecki/351/head -> origin/gh/mikaylagawarecki/351/head 2025-12-04T10:14:41.2315545Z * [new branch] gh/mikaylagawarecki/351/orig -> origin/gh/mikaylagawarecki/351/orig 2025-12-04T10:14:41.2315634Z * [new branch] gh/mikaylagawarecki/352/base -> origin/gh/mikaylagawarecki/352/base 2025-12-04T10:14:41.2315724Z * [new branch] gh/mikaylagawarecki/352/head -> origin/gh/mikaylagawarecki/352/head 2025-12-04T10:14:41.2315816Z * [new branch] gh/mikaylagawarecki/352/orig -> origin/gh/mikaylagawarecki/352/orig 2025-12-04T10:14:41.2315906Z * [new branch] gh/mikaylagawarecki/353/base -> origin/gh/mikaylagawarecki/353/base 2025-12-04T10:14:41.2315996Z * [new branch] gh/mikaylagawarecki/353/head -> origin/gh/mikaylagawarecki/353/head 2025-12-04T10:14:41.2316088Z * [new branch] gh/mikaylagawarecki/353/orig -> origin/gh/mikaylagawarecki/353/orig 2025-12-04T10:14:41.2316177Z * [new branch] gh/mikaylagawarecki/354/base -> origin/gh/mikaylagawarecki/354/base 2025-12-04T10:14:41.2316294Z * [new branch] gh/mikaylagawarecki/354/head -> origin/gh/mikaylagawarecki/354/head 2025-12-04T10:14:41.2316384Z * [new branch] gh/mikaylagawarecki/354/orig -> origin/gh/mikaylagawarecki/354/orig 2025-12-04T10:14:41.2316474Z * [new branch] gh/mikaylagawarecki/356/base -> origin/gh/mikaylagawarecki/356/base 2025-12-04T10:14:41.2316567Z * [new branch] gh/mikaylagawarecki/356/head -> origin/gh/mikaylagawarecki/356/head 2025-12-04T10:14:41.2316658Z * [new branch] gh/mikaylagawarecki/356/orig -> origin/gh/mikaylagawarecki/356/orig 2025-12-04T10:14:41.2316748Z * [new branch] gh/mikaylagawarecki/357/base -> origin/gh/mikaylagawarecki/357/base 2025-12-04T10:14:41.2316837Z * [new branch] gh/mikaylagawarecki/357/head -> origin/gh/mikaylagawarecki/357/head 2025-12-04T10:14:41.2316927Z * [new branch] gh/mikaylagawarecki/357/orig -> origin/gh/mikaylagawarecki/357/orig 2025-12-04T10:14:41.2317018Z * [new branch] gh/mikaylagawarecki/359/base -> origin/gh/mikaylagawarecki/359/base 2025-12-04T10:14:41.2317109Z * [new branch] gh/mikaylagawarecki/359/head -> origin/gh/mikaylagawarecki/359/head 2025-12-04T10:14:41.2317199Z * [new branch] gh/mikaylagawarecki/359/orig -> origin/gh/mikaylagawarecki/359/orig 2025-12-04T10:14:41.2317317Z * [new branch] gh/mikaylagawarecki/360/base -> origin/gh/mikaylagawarecki/360/base 2025-12-04T10:14:41.2317409Z * [new branch] gh/mikaylagawarecki/360/head -> origin/gh/mikaylagawarecki/360/head 2025-12-04T10:14:41.2317499Z * [new branch] gh/mikaylagawarecki/360/orig -> origin/gh/mikaylagawarecki/360/orig 2025-12-04T10:14:41.2317590Z * [new branch] gh/mikaylagawarecki/361/base -> origin/gh/mikaylagawarecki/361/base 2025-12-04T10:14:41.2317681Z * [new branch] gh/mikaylagawarecki/361/head -> origin/gh/mikaylagawarecki/361/head 2025-12-04T10:14:41.2317772Z * [new branch] gh/mikaylagawarecki/361/orig -> origin/gh/mikaylagawarecki/361/orig 2025-12-04T10:14:41.2317861Z * [new branch] gh/mikaylagawarecki/362/base -> origin/gh/mikaylagawarecki/362/base 
2025-12-04T10:14:41.2317952Z * [new branch] gh/mikaylagawarecki/362/head -> origin/gh/mikaylagawarecki/362/head 2025-12-04T10:14:41.2318043Z * [new branch] gh/mikaylagawarecki/362/orig -> origin/gh/mikaylagawarecki/362/orig 2025-12-04T10:14:41.2318134Z * [new branch] gh/mikaylagawarecki/363/base -> origin/gh/mikaylagawarecki/363/base 2025-12-04T10:14:41.2318225Z * [new branch] gh/mikaylagawarecki/363/head -> origin/gh/mikaylagawarecki/363/head 2025-12-04T10:14:41.2318314Z * [new branch] gh/mikaylagawarecki/363/orig -> origin/gh/mikaylagawarecki/363/orig 2025-12-04T10:14:41.2318405Z * [new branch] gh/mikaylagawarecki/364/base -> origin/gh/mikaylagawarecki/364/base 2025-12-04T10:14:41.2318497Z * [new branch] gh/mikaylagawarecki/364/head -> origin/gh/mikaylagawarecki/364/head 2025-12-04T10:14:41.2318587Z * [new branch] gh/mikaylagawarecki/364/orig -> origin/gh/mikaylagawarecki/364/orig 2025-12-04T10:14:41.2318677Z * [new branch] gh/mikaylagawarecki/365/base -> origin/gh/mikaylagawarecki/365/base 2025-12-04T10:14:41.2318768Z * [new branch] gh/mikaylagawarecki/365/head -> origin/gh/mikaylagawarecki/365/head 2025-12-04T10:14:41.2318858Z * [new branch] gh/mikaylagawarecki/365/orig -> origin/gh/mikaylagawarecki/365/orig 2025-12-04T10:14:41.2318949Z * [new branch] gh/mikaylagawarecki/366/base -> origin/gh/mikaylagawarecki/366/base 2025-12-04T10:14:41.2319038Z * [new branch] gh/mikaylagawarecki/366/head -> origin/gh/mikaylagawarecki/366/head 2025-12-04T10:14:41.2319128Z * [new branch] gh/mikaylagawarecki/366/orig -> origin/gh/mikaylagawarecki/366/orig 2025-12-04T10:14:41.2319249Z * [new branch] gh/mikaylagawarecki/367/base -> origin/gh/mikaylagawarecki/367/base 2025-12-04T10:14:41.2319339Z * [new branch] gh/mikaylagawarecki/367/head -> origin/gh/mikaylagawarecki/367/head 2025-12-04T10:14:41.2319429Z * [new branch] gh/mikaylagawarecki/367/orig -> origin/gh/mikaylagawarecki/367/orig 2025-12-04T10:14:41.2319522Z * [new branch] gh/mikaylagawarecki/368/base -> origin/gh/mikaylagawarecki/368/base 2025-12-04T10:14:41.2319613Z * [new branch] gh/mikaylagawarecki/368/head -> origin/gh/mikaylagawarecki/368/head 2025-12-04T10:14:41.2319704Z * [new branch] gh/mikaylagawarecki/368/orig -> origin/gh/mikaylagawarecki/368/orig 2025-12-04T10:14:41.2319796Z * [new branch] gh/mikaylagawarecki/369/base -> origin/gh/mikaylagawarecki/369/base 2025-12-04T10:14:41.2319885Z * [new branch] gh/mikaylagawarecki/369/head -> origin/gh/mikaylagawarecki/369/head 2025-12-04T10:14:41.2319977Z * [new branch] gh/mikaylagawarecki/369/orig -> origin/gh/mikaylagawarecki/369/orig 2025-12-04T10:14:41.2320067Z * [new branch] gh/mikaylagawarecki/370/base -> origin/gh/mikaylagawarecki/370/base 2025-12-04T10:14:41.2320158Z * [new branch] gh/mikaylagawarecki/370/head -> origin/gh/mikaylagawarecki/370/head 2025-12-04T10:14:41.2320277Z * [new branch] gh/mikaylagawarecki/370/orig -> origin/gh/mikaylagawarecki/370/orig 2025-12-04T10:14:41.2320367Z * [new branch] gh/mikaylagawarecki/371/base -> origin/gh/mikaylagawarecki/371/base 2025-12-04T10:14:41.2320457Z * [new branch] gh/mikaylagawarecki/371/head -> origin/gh/mikaylagawarecki/371/head 2025-12-04T10:14:41.2320547Z * [new branch] gh/mikaylagawarecki/371/orig -> origin/gh/mikaylagawarecki/371/orig 2025-12-04T10:14:41.2320681Z * [new branch] gh/mikaylagawarecki/372/base -> origin/gh/mikaylagawarecki/372/base 2025-12-04T10:14:41.2320775Z * [new branch] gh/mikaylagawarecki/372/head -> origin/gh/mikaylagawarecki/372/head 2025-12-04T10:14:41.2320867Z * [new branch] gh/mikaylagawarecki/372/orig -> 
origin/gh/mikaylagawarecki/372/orig 2025-12-04T10:14:41.2320958Z * [new branch] gh/mikaylagawarecki/373/base -> origin/gh/mikaylagawarecki/373/base 2025-12-04T10:14:41.2321049Z * [new branch] gh/mikaylagawarecki/373/head -> origin/gh/mikaylagawarecki/373/head 2025-12-04T10:14:41.2321140Z * [new branch] gh/mikaylagawarecki/373/orig -> origin/gh/mikaylagawarecki/373/orig 2025-12-04T10:14:41.2321230Z * [new branch] gh/mikaylagawarecki/374/base -> origin/gh/mikaylagawarecki/374/base 2025-12-04T10:14:41.2321323Z * [new branch] gh/mikaylagawarecki/374/head -> origin/gh/mikaylagawarecki/374/head 2025-12-04T10:14:41.2321413Z * [new branch] gh/mikaylagawarecki/374/orig -> origin/gh/mikaylagawarecki/374/orig 2025-12-04T10:14:41.2321505Z * [new branch] gh/mikaylagawarecki/375/base -> origin/gh/mikaylagawarecki/375/base 2025-12-04T10:14:41.2321596Z * [new branch] gh/mikaylagawarecki/375/head -> origin/gh/mikaylagawarecki/375/head 2025-12-04T10:14:41.2321687Z * [new branch] gh/mikaylagawarecki/375/orig -> origin/gh/mikaylagawarecki/375/orig 2025-12-04T10:14:41.2321778Z * [new branch] gh/mikaylagawarecki/376/base -> origin/gh/mikaylagawarecki/376/base 2025-12-04T10:14:41.2321872Z * [new branch] gh/mikaylagawarecki/376/head -> origin/gh/mikaylagawarecki/376/head 2025-12-04T10:14:41.2321962Z * [new branch] gh/mikaylagawarecki/376/orig -> origin/gh/mikaylagawarecki/376/orig 2025-12-04T10:14:41.2322053Z * [new branch] gh/mikaylagawarecki/377/base -> origin/gh/mikaylagawarecki/377/base 2025-12-04T10:14:41.2322145Z * [new branch] gh/mikaylagawarecki/377/head -> origin/gh/mikaylagawarecki/377/head 2025-12-04T10:14:41.2322279Z * [new branch] gh/mikaylagawarecki/377/orig -> origin/gh/mikaylagawarecki/377/orig 2025-12-04T10:14:41.2322370Z * [new branch] gh/mikaylagawarecki/378/base -> origin/gh/mikaylagawarecki/378/base 2025-12-04T10:14:41.2322460Z * [new branch] gh/mikaylagawarecki/378/head -> origin/gh/mikaylagawarecki/378/head 2025-12-04T10:14:41.2322552Z * [new branch] gh/mikaylagawarecki/378/orig -> origin/gh/mikaylagawarecki/378/orig 2025-12-04T10:14:41.2322641Z * [new branch] gh/mikaylagawarecki/379/base -> origin/gh/mikaylagawarecki/379/base 2025-12-04T10:14:41.2322732Z * [new branch] gh/mikaylagawarecki/379/head -> origin/gh/mikaylagawarecki/379/head 2025-12-04T10:14:41.2322822Z * [new branch] gh/mikaylagawarecki/379/orig -> origin/gh/mikaylagawarecki/379/orig 2025-12-04T10:14:41.2322911Z * [new branch] gh/mikaylagawarecki/380/base -> origin/gh/mikaylagawarecki/380/base 2025-12-04T10:14:41.2323002Z * [new branch] gh/mikaylagawarecki/380/head -> origin/gh/mikaylagawarecki/380/head 2025-12-04T10:14:41.2323092Z * [new branch] gh/mikaylagawarecki/380/orig -> origin/gh/mikaylagawarecki/380/orig 2025-12-04T10:14:41.2323185Z * [new branch] gh/mikaylagawarecki/381/base -> origin/gh/mikaylagawarecki/381/base 2025-12-04T10:14:41.2323324Z * [new branch] gh/mikaylagawarecki/381/head -> origin/gh/mikaylagawarecki/381/head 2025-12-04T10:14:41.2323414Z * [new branch] gh/mikaylagawarecki/381/orig -> origin/gh/mikaylagawarecki/381/orig 2025-12-04T10:14:41.2323505Z * [new branch] gh/mikaylagawarecki/382/base -> origin/gh/mikaylagawarecki/382/base 2025-12-04T10:14:41.2323595Z * [new branch] gh/mikaylagawarecki/382/head -> origin/gh/mikaylagawarecki/382/head 2025-12-04T10:14:41.2323685Z * [new branch] gh/mikaylagawarecki/382/orig -> origin/gh/mikaylagawarecki/382/orig 2025-12-04T10:14:41.2323777Z * [new branch] gh/mikaylagawarecki/383/base -> origin/gh/mikaylagawarecki/383/base 2025-12-04T10:14:41.2323867Z * [new branch] 
gh/mikaylagawarecki/383/head -> origin/gh/mikaylagawarecki/383/head 2025-12-04T10:14:41.2323957Z * [new branch] gh/mikaylagawarecki/383/orig -> origin/gh/mikaylagawarecki/383/orig 2025-12-04T10:14:41.2324051Z * [new branch] gh/mikaylagawarecki/384/base -> origin/gh/mikaylagawarecki/384/base 2025-12-04T10:14:41.2324142Z * [new branch] gh/mikaylagawarecki/384/head -> origin/gh/mikaylagawarecki/384/head 2025-12-04T10:14:41.2324234Z * [new branch] gh/mikaylagawarecki/384/orig -> origin/gh/mikaylagawarecki/384/orig 2025-12-04T10:14:41.2324325Z * [new branch] gh/mikaylagawarecki/385/base -> origin/gh/mikaylagawarecki/385/base 2025-12-04T10:14:41.2324415Z * [new branch] gh/mikaylagawarecki/385/head -> origin/gh/mikaylagawarecki/385/head 2025-12-04T10:14:41.2324508Z * [new branch] gh/mikaylagawarecki/385/orig -> origin/gh/mikaylagawarecki/385/orig 2025-12-04T10:14:41.2324598Z * [new branch] gh/mikaylagawarecki/386/base -> origin/gh/mikaylagawarecki/386/base 2025-12-04T10:14:41.2324688Z * [new branch] gh/mikaylagawarecki/386/head -> origin/gh/mikaylagawarecki/386/head 2025-12-04T10:14:41.2324779Z * [new branch] gh/mikaylagawarecki/386/orig -> origin/gh/mikaylagawarecki/386/orig 2025-12-04T10:14:41.2324869Z * [new branch] gh/mikaylagawarecki/387/base -> origin/gh/mikaylagawarecki/387/base 2025-12-04T10:14:41.2324959Z * [new branch] gh/mikaylagawarecki/387/head -> origin/gh/mikaylagawarecki/387/head 2025-12-04T10:14:41.2325050Z * [new branch] gh/mikaylagawarecki/387/orig -> origin/gh/mikaylagawarecki/387/orig 2025-12-04T10:14:41.2325140Z * [new branch] gh/mikaylagawarecki/388/base -> origin/gh/mikaylagawarecki/388/base 2025-12-04T10:14:41.2325255Z * [new branch] gh/mikaylagawarecki/388/head -> origin/gh/mikaylagawarecki/388/head 2025-12-04T10:14:41.2325346Z * [new branch] gh/mikaylagawarecki/388/orig -> origin/gh/mikaylagawarecki/388/orig 2025-12-04T10:14:41.2325436Z * [new branch] gh/mikaylagawarecki/389/base -> origin/gh/mikaylagawarecki/389/base 2025-12-04T10:14:41.2325530Z * [new branch] gh/mikaylagawarecki/389/head -> origin/gh/mikaylagawarecki/389/head 2025-12-04T10:14:41.2325622Z * [new branch] gh/mikaylagawarecki/389/orig -> origin/gh/mikaylagawarecki/389/orig 2025-12-04T10:14:41.2325712Z * [new branch] gh/mikaylagawarecki/390/base -> origin/gh/mikaylagawarecki/390/base 2025-12-04T10:14:41.2325803Z * [new branch] gh/mikaylagawarecki/390/head -> origin/gh/mikaylagawarecki/390/head 2025-12-04T10:14:41.2325893Z * [new branch] gh/mikaylagawarecki/390/orig -> origin/gh/mikaylagawarecki/390/orig 2025-12-04T10:14:41.2325984Z * [new branch] gh/mikaylagawarecki/391/base -> origin/gh/mikaylagawarecki/391/base 2025-12-04T10:14:41.2326075Z * [new branch] gh/mikaylagawarecki/391/head -> origin/gh/mikaylagawarecki/391/head 2025-12-04T10:14:41.2326165Z * [new branch] gh/mikaylagawarecki/391/orig -> origin/gh/mikaylagawarecki/391/orig 2025-12-04T10:14:41.2326285Z * [new branch] gh/mikaylagawarecki/392/base -> origin/gh/mikaylagawarecki/392/base 2025-12-04T10:14:41.2326376Z * [new branch] gh/mikaylagawarecki/392/head -> origin/gh/mikaylagawarecki/392/head 2025-12-04T10:14:41.2326465Z * [new branch] gh/mikaylagawarecki/392/orig -> origin/gh/mikaylagawarecki/392/orig 2025-12-04T10:14:41.2326536Z * [new branch] gh/mlazos/41/base -> origin/gh/mlazos/41/base 2025-12-04T10:14:41.2326606Z * [new branch] gh/mlazos/41/head -> origin/gh/mlazos/41/head 2025-12-04T10:14:41.2326673Z * [new branch] gh/mlazos/41/orig -> origin/gh/mlazos/41/orig 2025-12-04T10:14:41.2326741Z * [new branch] gh/mlazos/42/base -> 
origin/gh/mlazos/42/base 2025-12-04T10:14:41.2326809Z * [new branch] gh/mlazos/42/head -> origin/gh/mlazos/42/head 2025-12-04T10:14:41.2326876Z * [new branch] gh/mlazos/42/orig -> origin/gh/mlazos/42/orig 2025-12-04T10:14:41.2326943Z * [new branch] gh/mlazos/43/base -> origin/gh/mlazos/43/base 2025-12-04T10:14:41.2327009Z * [new branch] gh/mlazos/43/head -> origin/gh/mlazos/43/head 2025-12-04T10:14:41.2327075Z * [new branch] gh/mlazos/43/orig -> origin/gh/mlazos/43/orig 2025-12-04T10:14:41.2327142Z * [new branch] gh/mlazos/44/base -> origin/gh/mlazos/44/base 2025-12-04T10:14:41.2327209Z * [new branch] gh/mlazos/44/head -> origin/gh/mlazos/44/head 2025-12-04T10:14:41.2327275Z * [new branch] gh/mlazos/44/orig -> origin/gh/mlazos/44/orig 2025-12-04T10:14:41.2327343Z * [new branch] gh/mlazos/47/base -> origin/gh/mlazos/47/base 2025-12-04T10:14:41.2327408Z * [new branch] gh/mlazos/47/head -> origin/gh/mlazos/47/head 2025-12-04T10:14:41.2327474Z * [new branch] gh/mlazos/47/orig -> origin/gh/mlazos/47/orig 2025-12-04T10:14:41.2327542Z * [new branch] gh/mlazos/48/base -> origin/gh/mlazos/48/base 2025-12-04T10:14:41.2327607Z * [new branch] gh/mlazos/48/head -> origin/gh/mlazos/48/head 2025-12-04T10:14:41.2327673Z * [new branch] gh/mlazos/48/orig -> origin/gh/mlazos/48/orig 2025-12-04T10:14:41.2327739Z * [new branch] gh/mlazos/49/base -> origin/gh/mlazos/49/base 2025-12-04T10:14:41.2327804Z * [new branch] gh/mlazos/49/head -> origin/gh/mlazos/49/head 2025-12-04T10:14:41.2327870Z * [new branch] gh/mlazos/49/orig -> origin/gh/mlazos/49/orig 2025-12-04T10:14:41.2327966Z * [new branch] gh/mlazos/50/base -> origin/gh/mlazos/50/base 2025-12-04T10:14:41.2328032Z * [new branch] gh/mlazos/50/head -> origin/gh/mlazos/50/head 2025-12-04T10:14:41.2328098Z * [new branch] gh/mlazos/50/orig -> origin/gh/mlazos/50/orig 2025-12-04T10:14:41.2328166Z * [new branch] gh/mlazos/51/base -> origin/gh/mlazos/51/base 2025-12-04T10:14:41.2328232Z * [new branch] gh/mlazos/51/head -> origin/gh/mlazos/51/head 2025-12-04T10:14:41.2328297Z * [new branch] gh/mlazos/51/orig -> origin/gh/mlazos/51/orig 2025-12-04T10:14:41.2328366Z * [new branch] gh/mlazos/52/base -> origin/gh/mlazos/52/base 2025-12-04T10:14:41.2328431Z * [new branch] gh/mlazos/52/head -> origin/gh/mlazos/52/head 2025-12-04T10:14:41.2328497Z * [new branch] gh/mlazos/52/orig -> origin/gh/mlazos/52/orig 2025-12-04T10:14:41.2328569Z * [new branch] gh/mlazos/53/base -> origin/gh/mlazos/53/base 2025-12-04T10:14:41.2328634Z * [new branch] gh/mlazos/53/head -> origin/gh/mlazos/53/head 2025-12-04T10:14:41.2328701Z * [new branch] gh/mlazos/53/orig -> origin/gh/mlazos/53/orig 2025-12-04T10:14:41.2328767Z * [new branch] gh/mlazos/54/base -> origin/gh/mlazos/54/base 2025-12-04T10:14:41.2328858Z * [new branch] gh/mlazos/54/head -> origin/gh/mlazos/54/head 2025-12-04T10:14:41.2328927Z * [new branch] gh/mlazos/54/orig -> origin/gh/mlazos/54/orig 2025-12-04T10:14:41.2328992Z * [new branch] gh/mlazos/55/base -> origin/gh/mlazos/55/base 2025-12-04T10:14:41.2329058Z * [new branch] gh/mlazos/55/head -> origin/gh/mlazos/55/head 2025-12-04T10:14:41.2329125Z * [new branch] gh/mlazos/55/orig -> origin/gh/mlazos/55/orig 2025-12-04T10:14:41.2329192Z * [new branch] gh/mlazos/56/base -> origin/gh/mlazos/56/base 2025-12-04T10:14:41.2329258Z * [new branch] gh/mlazos/56/head -> origin/gh/mlazos/56/head 2025-12-04T10:14:41.2329325Z * [new branch] gh/mlazos/56/orig -> origin/gh/mlazos/56/orig 2025-12-04T10:14:41.2329391Z * [new branch] gh/mlazos/57/base -> origin/gh/mlazos/57/base 
2025-12-04T10:14:41.2329459Z * [new branch] gh/mlazos/57/head -> origin/gh/mlazos/57/head 2025-12-04T10:14:41.2329527Z * [new branch] gh/mlazos/57/orig -> origin/gh/mlazos/57/orig 2025-12-04T10:14:41.2329592Z * [new branch] gh/mlazos/58/base -> origin/gh/mlazos/58/base 2025-12-04T10:14:41.2329658Z * [new branch] gh/mlazos/58/head -> origin/gh/mlazos/58/head 2025-12-04T10:14:41.2329724Z * [new branch] gh/mlazos/58/orig -> origin/gh/mlazos/58/orig 2025-12-04T10:14:41.2329791Z * [new branch] gh/mlazos/59/base -> origin/gh/mlazos/59/base 2025-12-04T10:14:41.2329857Z * [new branch] gh/mlazos/59/head -> origin/gh/mlazos/59/head 2025-12-04T10:14:41.2329925Z * [new branch] gh/mlazos/59/orig -> origin/gh/mlazos/59/orig 2025-12-04T10:14:41.2329991Z * [new branch] gh/mlazos/60/base -> origin/gh/mlazos/60/base 2025-12-04T10:14:41.2330058Z * [new branch] gh/mlazos/60/head -> origin/gh/mlazos/60/head 2025-12-04T10:14:41.2330125Z * [new branch] gh/mlazos/60/orig -> origin/gh/mlazos/60/orig 2025-12-04T10:14:41.2330191Z * [new branch] gh/mlazos/61/base -> origin/gh/mlazos/61/base 2025-12-04T10:14:41.2330257Z * [new branch] gh/mlazos/61/head -> origin/gh/mlazos/61/head 2025-12-04T10:14:41.2330323Z * [new branch] gh/mlazos/61/orig -> origin/gh/mlazos/61/orig 2025-12-04T10:14:41.2330420Z * [new branch] gh/mlazos/62/base -> origin/gh/mlazos/62/base 2025-12-04T10:14:41.2330487Z * [new branch] gh/mlazos/62/head -> origin/gh/mlazos/62/head 2025-12-04T10:14:41.2330552Z * [new branch] gh/mlazos/62/orig -> origin/gh/mlazos/62/orig 2025-12-04T10:14:41.2330667Z * [new branch] gh/mlazos/63/base -> origin/gh/mlazos/63/base 2025-12-04T10:14:41.2330737Z * [new branch] gh/mlazos/63/head -> origin/gh/mlazos/63/head 2025-12-04T10:14:41.2330804Z * [new branch] gh/mlazos/63/orig -> origin/gh/mlazos/63/orig 2025-12-04T10:14:41.2330870Z * [new branch] gh/mlazos/64/base -> origin/gh/mlazos/64/base 2025-12-04T10:14:41.2330936Z * [new branch] gh/mlazos/64/head -> origin/gh/mlazos/64/head 2025-12-04T10:14:41.2331001Z * [new branch] gh/mlazos/64/orig -> origin/gh/mlazos/64/orig 2025-12-04T10:14:41.2331067Z * [new branch] gh/mlazos/65/base -> origin/gh/mlazos/65/base 2025-12-04T10:14:41.2331135Z * [new branch] gh/mlazos/65/head -> origin/gh/mlazos/65/head 2025-12-04T10:14:41.2331201Z * [new branch] gh/mlazos/65/orig -> origin/gh/mlazos/65/orig 2025-12-04T10:14:41.2331267Z * [new branch] gh/mlazos/66/base -> origin/gh/mlazos/66/base 2025-12-04T10:14:41.2331379Z * [new branch] gh/mlazos/66/head -> origin/gh/mlazos/66/head 2025-12-04T10:14:41.2331447Z * [new branch] gh/mlazos/66/orig -> origin/gh/mlazos/66/orig 2025-12-04T10:14:41.2331512Z * [new branch] gh/mlazos/67/base -> origin/gh/mlazos/67/base 2025-12-04T10:14:41.2331579Z * [new branch] gh/mlazos/67/head -> origin/gh/mlazos/67/head 2025-12-04T10:14:41.2331645Z * [new branch] gh/mlazos/67/orig -> origin/gh/mlazos/67/orig 2025-12-04T10:14:41.2331711Z * [new branch] gh/mlazos/68/base -> origin/gh/mlazos/68/base 2025-12-04T10:14:41.2331779Z * [new branch] gh/mlazos/68/head -> origin/gh/mlazos/68/head 2025-12-04T10:14:41.2331844Z * [new branch] gh/mlazos/68/orig -> origin/gh/mlazos/68/orig 2025-12-04T10:14:41.2331909Z * [new branch] gh/mlazos/69/base -> origin/gh/mlazos/69/base 2025-12-04T10:14:41.2331977Z * [new branch] gh/mlazos/69/head -> origin/gh/mlazos/69/head 2025-12-04T10:14:41.2332044Z * [new branch] gh/mlazos/69/orig -> origin/gh/mlazos/69/orig 2025-12-04T10:14:41.2332111Z * [new branch] gh/mlazos/70/base -> origin/gh/mlazos/70/base 2025-12-04T10:14:41.2332177Z * [new branch] 
gh/mlazos/70/head -> origin/gh/mlazos/70/head 2025-12-04T10:14:41.2332243Z * [new branch] gh/mlazos/70/orig -> origin/gh/mlazos/70/orig 2025-12-04T10:14:41.2332309Z * [new branch] gh/mlazos/71/base -> origin/gh/mlazos/71/base 2025-12-04T10:14:41.2332377Z * [new branch] gh/mlazos/71/head -> origin/gh/mlazos/71/head 2025-12-04T10:14:41.2332443Z * [new branch] gh/mlazos/71/orig -> origin/gh/mlazos/71/orig 2025-12-04T10:14:41.2332509Z * [new branch] gh/mlazos/72/base -> origin/gh/mlazos/72/base 2025-12-04T10:14:41.2332577Z * [new branch] gh/mlazos/72/head -> origin/gh/mlazos/72/head 2025-12-04T10:14:41.2332642Z * [new branch] gh/mlazos/72/orig -> origin/gh/mlazos/72/orig 2025-12-04T10:14:41.2332709Z * [new branch] gh/mlazos/73/base -> origin/gh/mlazos/73/base 2025-12-04T10:14:41.2332775Z * [new branch] gh/mlazos/73/head -> origin/gh/mlazos/73/head 2025-12-04T10:14:41.2332841Z * [new branch] gh/mlazos/73/orig -> origin/gh/mlazos/73/orig 2025-12-04T10:14:41.2332908Z * [new branch] gh/mrmiywj/1/base -> origin/gh/mrmiywj/1/base 2025-12-04T10:14:41.2333034Z * [new branch] gh/mrmiywj/1/head -> origin/gh/mrmiywj/1/head 2025-12-04T10:14:41.2333108Z * [new branch] gh/muchulee8/73/base -> origin/gh/muchulee8/73/base 2025-12-04T10:14:41.2333182Z * [new branch] gh/muchulee8/73/head -> origin/gh/muchulee8/73/head 2025-12-04T10:14:41.2333254Z * [new branch] gh/muchulee8/73/orig -> origin/gh/muchulee8/73/orig 2025-12-04T10:14:41.2333340Z * [new branch] gh/naveenthangudu/1/base -> origin/gh/naveenthangudu/1/base 2025-12-04T10:14:41.2333423Z * [new branch] gh/naveenthangudu/1/head -> origin/gh/naveenthangudu/1/head 2025-12-04T10:14:41.2333503Z * [new branch] gh/naveenthangudu/1/orig -> origin/gh/naveenthangudu/1/orig 2025-12-04T10:14:41.2333583Z * [new branch] gh/naveenthangudu/2/base -> origin/gh/naveenthangudu/2/base 2025-12-04T10:14:41.2333663Z * [new branch] gh/naveenthangudu/2/head -> origin/gh/naveenthangudu/2/head 2025-12-04T10:14:41.2333743Z * [new branch] gh/naveenthangudu/2/orig -> origin/gh/naveenthangudu/2/orig 2025-12-04T10:14:41.2333823Z * [new branch] gh/naveenthangudu/3/base -> origin/gh/naveenthangudu/3/base 2025-12-04T10:14:41.2333902Z * [new branch] gh/naveenthangudu/3/head -> origin/gh/naveenthangudu/3/head 2025-12-04T10:14:41.2334012Z * [new branch] gh/naveenthangudu/3/orig -> origin/gh/naveenthangudu/3/orig 2025-12-04T10:14:41.2334093Z * [new branch] gh/naveenthangudu/4/base -> origin/gh/naveenthangudu/4/base 2025-12-04T10:14:41.2334171Z * [new branch] gh/naveenthangudu/4/head -> origin/gh/naveenthangudu/4/head 2025-12-04T10:14:41.2334250Z * [new branch] gh/naveenthangudu/4/orig -> origin/gh/naveenthangudu/4/orig 2025-12-04T10:14:41.2334330Z * [new branch] gh/naveenthangudu/5/base -> origin/gh/naveenthangudu/5/base 2025-12-04T10:14:41.2334411Z * [new branch] gh/naveenthangudu/5/head -> origin/gh/naveenthangudu/5/head 2025-12-04T10:14:41.2334489Z * [new branch] gh/naveenthangudu/5/orig -> origin/gh/naveenthangudu/5/orig 2025-12-04T10:14:41.2334568Z * [new branch] gh/naveenthangudu/6/base -> origin/gh/naveenthangudu/6/base 2025-12-04T10:14:41.2334646Z * [new branch] gh/naveenthangudu/6/head -> origin/gh/naveenthangudu/6/head 2025-12-04T10:14:41.2334726Z * [new branch] gh/naveenthangudu/6/orig -> origin/gh/naveenthangudu/6/orig 2025-12-04T10:14:41.2334805Z * [new branch] gh/naveenthangudu/7/base -> origin/gh/naveenthangudu/7/base 2025-12-04T10:14:41.2334884Z * [new branch] gh/naveenthangudu/7/head -> origin/gh/naveenthangudu/7/head 2025-12-04T10:14:41.2334962Z * [new branch] 
gh/naveenthangudu/7/orig -> origin/gh/naveenthangudu/7/orig 2025-12-04T10:14:41.2335042Z * [new branch] gh/naveenthangudu/8/base -> origin/gh/naveenthangudu/8/base 2025-12-04T10:14:41.2335123Z * [new branch] gh/naveenthangudu/8/head -> origin/gh/naveenthangudu/8/head 2025-12-04T10:14:41.2335203Z * [new branch] gh/naveenthangudu/8/orig -> origin/gh/naveenthangudu/8/orig 2025-12-04T10:14:41.2335281Z * [new branch] gh/naveenthangudu/9/base -> origin/gh/naveenthangudu/9/base 2025-12-04T10:14:41.2335360Z * [new branch] gh/naveenthangudu/9/head -> origin/gh/naveenthangudu/9/head 2025-12-04T10:14:41.2335440Z * [new branch] gh/naveenthangudu/9/orig -> origin/gh/naveenthangudu/9/orig 2025-12-04T10:14:41.2335512Z * [new branch] gh/nikitaved/1/base -> origin/gh/nikitaved/1/base 2025-12-04T10:14:41.2335583Z * [new branch] gh/nikitaved/1/head -> origin/gh/nikitaved/1/head 2025-12-04T10:14:41.2335656Z * [new branch] gh/nikitaved/1/orig -> origin/gh/nikitaved/1/orig 2025-12-04T10:14:41.2335729Z * [new branch] gh/nikitaved/10/base -> origin/gh/nikitaved/10/base 2025-12-04T10:14:41.2335832Z * [new branch] gh/nikitaved/10/head -> origin/gh/nikitaved/10/head 2025-12-04T10:14:41.2335905Z * [new branch] gh/nikitaved/10/orig -> origin/gh/nikitaved/10/orig 2025-12-04T10:14:41.2335976Z * [new branch] gh/nikitaved/11/base -> origin/gh/nikitaved/11/base 2025-12-04T10:14:41.2336047Z * [new branch] gh/nikitaved/11/head -> origin/gh/nikitaved/11/head 2025-12-04T10:14:41.2336118Z * [new branch] gh/nikitaved/11/orig -> origin/gh/nikitaved/11/orig 2025-12-04T10:14:41.2336189Z * [new branch] gh/nikitaved/12/base -> origin/gh/nikitaved/12/base 2025-12-04T10:14:41.2336259Z * [new branch] gh/nikitaved/12/head -> origin/gh/nikitaved/12/head 2025-12-04T10:14:41.2336330Z * [new branch] gh/nikitaved/12/orig -> origin/gh/nikitaved/12/orig 2025-12-04T10:14:41.2336402Z * [new branch] gh/nikitaved/13/base -> origin/gh/nikitaved/13/base 2025-12-04T10:14:41.2336473Z * [new branch] gh/nikitaved/13/head -> origin/gh/nikitaved/13/head 2025-12-04T10:14:41.2336544Z * [new branch] gh/nikitaved/13/orig -> origin/gh/nikitaved/13/orig 2025-12-04T10:14:41.2336615Z * [new branch] gh/nikitaved/14/base -> origin/gh/nikitaved/14/base 2025-12-04T10:14:41.2336719Z * [new branch] gh/nikitaved/14/head -> origin/gh/nikitaved/14/head 2025-12-04T10:14:41.2336789Z * [new branch] gh/nikitaved/14/orig -> origin/gh/nikitaved/14/orig 2025-12-04T10:14:41.2336860Z * [new branch] gh/nikitaved/15/base -> origin/gh/nikitaved/15/base 2025-12-04T10:14:41.2336931Z * [new branch] gh/nikitaved/15/head -> origin/gh/nikitaved/15/head 2025-12-04T10:14:41.2337001Z * [new branch] gh/nikitaved/15/orig -> origin/gh/nikitaved/15/orig 2025-12-04T10:14:41.2337075Z * [new branch] gh/nikitaved/16/base -> origin/gh/nikitaved/16/base 2025-12-04T10:14:41.2337147Z * [new branch] gh/nikitaved/16/head -> origin/gh/nikitaved/16/head 2025-12-04T10:14:41.2337217Z * [new branch] gh/nikitaved/16/orig -> origin/gh/nikitaved/16/orig 2025-12-04T10:14:41.2337288Z * [new branch] gh/nikitaved/2/base -> origin/gh/nikitaved/2/base 2025-12-04T10:14:41.2337362Z * [new branch] gh/nikitaved/2/head -> origin/gh/nikitaved/2/head 2025-12-04T10:14:41.2337432Z * [new branch] gh/nikitaved/2/orig -> origin/gh/nikitaved/2/orig 2025-12-04T10:14:41.2337501Z * [new branch] gh/nikitaved/4/base -> origin/gh/nikitaved/4/base 2025-12-04T10:14:41.2337571Z * [new branch] gh/nikitaved/4/head -> origin/gh/nikitaved/4/head 2025-12-04T10:14:41.2337641Z * [new branch] gh/nikitaved/4/orig -> origin/gh/nikitaved/4/orig 
2025-12-04T10:14:41.2337712Z * [new branch] gh/nikitaved/5/base -> origin/gh/nikitaved/5/base 2025-12-04T10:14:41.2337783Z * [new branch] gh/nikitaved/5/head -> origin/gh/nikitaved/5/head 2025-12-04T10:14:41.2337851Z * [new branch] gh/nikitaved/5/orig -> origin/gh/nikitaved/5/orig 2025-12-04T10:14:41.2337920Z * [new branch] gh/nikitaved/6/base -> origin/gh/nikitaved/6/base 2025-12-04T10:14:41.2337992Z * [new branch] gh/nikitaved/6/head -> origin/gh/nikitaved/6/head 2025-12-04T10:14:41.2338061Z * [new branch] gh/nikitaved/6/orig -> origin/gh/nikitaved/6/orig 2025-12-04T10:14:41.2338130Z * [new branch] gh/nikitaved/8/base -> origin/gh/nikitaved/8/base 2025-12-04T10:14:41.2338200Z * [new branch] gh/nikitaved/8/head -> origin/gh/nikitaved/8/head 2025-12-04T10:14:41.2338268Z * [new branch] gh/nikitaved/8/orig -> origin/gh/nikitaved/8/orig 2025-12-04T10:14:41.2338363Z * [new branch] gh/nikitaved/9/base -> origin/gh/nikitaved/9/base 2025-12-04T10:14:41.2338432Z * [new branch] gh/nikitaved/9/head -> origin/gh/nikitaved/9/head 2025-12-04T10:14:41.2338502Z * [new branch] gh/nikitaved/9/orig -> origin/gh/nikitaved/9/orig 2025-12-04T10:14:41.2338571Z * [new branch] gh/oulgen/10/base -> origin/gh/oulgen/10/base 2025-12-04T10:14:41.2338640Z * [new branch] gh/oulgen/10/head -> origin/gh/oulgen/10/head 2025-12-04T10:14:41.2338708Z * [new branch] gh/oulgen/10/orig -> origin/gh/oulgen/10/orig 2025-12-04T10:14:41.2338776Z * [new branch] gh/oulgen/11/base -> origin/gh/oulgen/11/base 2025-12-04T10:14:41.2338842Z * [new branch] gh/oulgen/11/head -> origin/gh/oulgen/11/head 2025-12-04T10:14:41.2338908Z * [new branch] gh/oulgen/11/orig -> origin/gh/oulgen/11/orig 2025-12-04T10:14:41.2338978Z * [new branch] gh/oulgen/12/base -> origin/gh/oulgen/12/base 2025-12-04T10:14:41.2339043Z * [new branch] gh/oulgen/12/head -> origin/gh/oulgen/12/head 2025-12-04T10:14:41.2339109Z * [new branch] gh/oulgen/12/orig -> origin/gh/oulgen/12/orig 2025-12-04T10:14:41.2339176Z * [new branch] gh/oulgen/13/base -> origin/gh/oulgen/13/base 2025-12-04T10:14:41.2339269Z * [new branch] gh/oulgen/13/head -> origin/gh/oulgen/13/head 2025-12-04T10:14:41.2339335Z * [new branch] gh/oulgen/13/orig -> origin/gh/oulgen/13/orig 2025-12-04T10:14:41.2339402Z * [new branch] gh/oulgen/14/base -> origin/gh/oulgen/14/base 2025-12-04T10:14:41.2339468Z * [new branch] gh/oulgen/14/head -> origin/gh/oulgen/14/head 2025-12-04T10:14:41.2339533Z * [new branch] gh/oulgen/14/orig -> origin/gh/oulgen/14/orig 2025-12-04T10:14:41.2339599Z * [new branch] gh/oulgen/15/base -> origin/gh/oulgen/15/base 2025-12-04T10:14:41.2339665Z * [new branch] gh/oulgen/15/head -> origin/gh/oulgen/15/head 2025-12-04T10:14:41.2339730Z * [new branch] gh/oulgen/15/orig -> origin/gh/oulgen/15/orig 2025-12-04T10:14:41.2339798Z * [new branch] gh/oulgen/16/base -> origin/gh/oulgen/16/base 2025-12-04T10:14:41.2339864Z * [new branch] gh/oulgen/16/head -> origin/gh/oulgen/16/head 2025-12-04T10:14:41.2339931Z * [new branch] gh/oulgen/16/orig -> origin/gh/oulgen/16/orig 2025-12-04T10:14:41.2339998Z * [new branch] gh/oulgen/17/base -> origin/gh/oulgen/17/base 2025-12-04T10:14:41.2340064Z * [new branch] gh/oulgen/17/head -> origin/gh/oulgen/17/head 2025-12-04T10:14:41.2340132Z * [new branch] gh/oulgen/17/orig -> origin/gh/oulgen/17/orig 2025-12-04T10:14:41.2340198Z * [new branch] gh/oulgen/18/base -> origin/gh/oulgen/18/base 2025-12-04T10:14:41.2340264Z * [new branch] gh/oulgen/18/head -> origin/gh/oulgen/18/head 2025-12-04T10:14:41.2340332Z * [new branch] gh/oulgen/18/orig -> 
origin/gh/oulgen/18/orig 2025-12-04T10:14:41.2340398Z * [new branch] gh/oulgen/19/base -> origin/gh/oulgen/19/base 2025-12-04T10:14:41.2340464Z * [new branch] gh/oulgen/19/head -> origin/gh/oulgen/19/head 2025-12-04T10:14:41.2340531Z * [new branch] gh/oulgen/19/orig -> origin/gh/oulgen/19/orig 2025-12-04T10:14:41.2340651Z * [new branch] gh/oulgen/20/base -> origin/gh/oulgen/20/base 2025-12-04T10:14:41.2340718Z * [new branch] gh/oulgen/20/head -> origin/gh/oulgen/20/head 2025-12-04T10:14:41.2340784Z * [new branch] gh/oulgen/20/orig -> origin/gh/oulgen/20/orig 2025-12-04T10:14:41.2340850Z * [new branch] gh/oulgen/21/base -> origin/gh/oulgen/21/base 2025-12-04T10:14:41.2340960Z * [new branch] gh/oulgen/21/head -> origin/gh/oulgen/21/head 2025-12-04T10:14:41.2341028Z * [new branch] gh/oulgen/21/orig -> origin/gh/oulgen/21/orig 2025-12-04T10:14:41.2341094Z * [new branch] gh/oulgen/22/base -> origin/gh/oulgen/22/base 2025-12-04T10:14:41.2341160Z * [new branch] gh/oulgen/22/head -> origin/gh/oulgen/22/head 2025-12-04T10:14:41.2341227Z * [new branch] gh/oulgen/22/orig -> origin/gh/oulgen/22/orig 2025-12-04T10:14:41.2341292Z * [new branch] gh/oulgen/23/base -> origin/gh/oulgen/23/base 2025-12-04T10:14:41.2341360Z * [new branch] gh/oulgen/23/head -> origin/gh/oulgen/23/head 2025-12-04T10:14:41.2341429Z * [new branch] gh/oulgen/23/orig -> origin/gh/oulgen/23/orig 2025-12-04T10:14:41.2341494Z * [new branch] gh/oulgen/24/base -> origin/gh/oulgen/24/base 2025-12-04T10:14:41.2341563Z * [new branch] gh/oulgen/24/head -> origin/gh/oulgen/24/head 2025-12-04T10:14:41.2341630Z * [new branch] gh/oulgen/24/orig -> origin/gh/oulgen/24/orig 2025-12-04T10:14:41.2341696Z * [new branch] gh/oulgen/25/base -> origin/gh/oulgen/25/base 2025-12-04T10:14:41.2341801Z * [new branch] gh/oulgen/25/head -> origin/gh/oulgen/25/head 2025-12-04T10:14:41.2341868Z * [new branch] gh/oulgen/25/orig -> origin/gh/oulgen/25/orig 2025-12-04T10:14:41.2341933Z * [new branch] gh/oulgen/26/base -> origin/gh/oulgen/26/base 2025-12-04T10:14:41.2342000Z * [new branch] gh/oulgen/26/head -> origin/gh/oulgen/26/head 2025-12-04T10:14:41.2342066Z * [new branch] gh/oulgen/26/orig -> origin/gh/oulgen/26/orig 2025-12-04T10:14:41.2342132Z * [new branch] gh/oulgen/4/base -> origin/gh/oulgen/4/base 2025-12-04T10:14:41.2342203Z * [new branch] gh/oulgen/4/head -> origin/gh/oulgen/4/head 2025-12-04T10:14:41.2342268Z * [new branch] gh/oulgen/4/orig -> origin/gh/oulgen/4/orig 2025-12-04T10:14:41.2342334Z * [new branch] gh/oulgen/7/base -> origin/gh/oulgen/7/base 2025-12-04T10:14:41.2342400Z * [new branch] gh/oulgen/7/head -> origin/gh/oulgen/7/head 2025-12-04T10:14:41.2342469Z * [new branch] gh/oulgen/7/orig -> origin/gh/oulgen/7/orig 2025-12-04T10:14:41.2342534Z * [new branch] gh/oulgen/8/base -> origin/gh/oulgen/8/base 2025-12-04T10:14:41.2342600Z * [new branch] gh/oulgen/8/head -> origin/gh/oulgen/8/head 2025-12-04T10:14:41.2342665Z * [new branch] gh/oulgen/8/orig -> origin/gh/oulgen/8/orig 2025-12-04T10:14:41.2342730Z * [new branch] gh/oulgen/9/base -> origin/gh/oulgen/9/base 2025-12-04T10:14:41.2342798Z * [new branch] gh/oulgen/9/head -> origin/gh/oulgen/9/head 2025-12-04T10:14:41.2342863Z * [new branch] gh/oulgen/9/orig -> origin/gh/oulgen/9/orig 2025-12-04T10:14:41.2342967Z * [new branch] gh/patvig/mtia-serialization -> origin/gh/patvig/mtia-serialization 2025-12-04T10:14:41.2343036Z * [new branch] gh/pearu/108/base -> origin/gh/pearu/108/base 2025-12-04T10:14:41.2343106Z * [new branch] gh/pearu/108/head -> origin/gh/pearu/108/head 
2025-12-04T10:14:41.2343173Z * [new branch] gh/pearu/108/orig -> origin/gh/pearu/108/orig 2025-12-04T10:14:41.2343241Z * [new branch] gh/pearu/109/base -> origin/gh/pearu/109/base 2025-12-04T10:14:41.2343306Z * [new branch] gh/pearu/109/head -> origin/gh/pearu/109/head 2025-12-04T10:14:41.2343373Z * [new branch] gh/pearu/109/orig -> origin/gh/pearu/109/orig 2025-12-04T10:14:41.2343475Z * [new branch] gh/pearu/110/base -> origin/gh/pearu/110/base 2025-12-04T10:14:41.2343540Z * [new branch] gh/pearu/110/head -> origin/gh/pearu/110/head 2025-12-04T10:14:41.2343611Z * [new branch] gh/pearu/110/orig -> origin/gh/pearu/110/orig 2025-12-04T10:14:41.2343678Z * [new branch] gh/pearu/111/base -> origin/gh/pearu/111/base 2025-12-04T10:14:41.2343746Z * [new branch] gh/pearu/111/head -> origin/gh/pearu/111/head 2025-12-04T10:14:41.2343812Z * [new branch] gh/pearu/111/orig -> origin/gh/pearu/111/orig 2025-12-04T10:14:41.2343877Z * [new branch] gh/pearu/112/base -> origin/gh/pearu/112/base 2025-12-04T10:14:41.2343943Z * [new branch] gh/pearu/112/head -> origin/gh/pearu/112/head 2025-12-04T10:14:41.2344011Z * [new branch] gh/pearu/112/orig -> origin/gh/pearu/112/orig 2025-12-04T10:14:41.2344079Z * [new branch] gh/pearu/115/base -> origin/gh/pearu/115/base 2025-12-04T10:14:41.2344145Z * [new branch] gh/pearu/115/head -> origin/gh/pearu/115/head 2025-12-04T10:14:41.2344212Z * [new branch] gh/pearu/115/orig -> origin/gh/pearu/115/orig 2025-12-04T10:14:41.2344278Z * [new branch] gh/pearu/116/base -> origin/gh/pearu/116/base 2025-12-04T10:14:41.2344371Z * [new branch] gh/pearu/116/head -> origin/gh/pearu/116/head 2025-12-04T10:14:41.2344440Z * [new branch] gh/pearu/116/orig -> origin/gh/pearu/116/orig 2025-12-04T10:14:41.2344506Z * [new branch] gh/pearu/117/base -> origin/gh/pearu/117/base 2025-12-04T10:14:41.2344572Z * [new branch] gh/pearu/117/head -> origin/gh/pearu/117/head 2025-12-04T10:14:41.2344639Z * [new branch] gh/pearu/117/orig -> origin/gh/pearu/117/orig 2025-12-04T10:14:41.2344706Z * [new branch] gh/pearu/118/base -> origin/gh/pearu/118/base 2025-12-04T10:14:41.2344773Z * [new branch] gh/pearu/118/head -> origin/gh/pearu/118/head 2025-12-04T10:14:41.2344840Z * [new branch] gh/pearu/118/orig -> origin/gh/pearu/118/orig 2025-12-04T10:14:41.2344907Z * [new branch] gh/pearu/119/base -> origin/gh/pearu/119/base 2025-12-04T10:14:41.2344976Z * [new branch] gh/pearu/119/head -> origin/gh/pearu/119/head 2025-12-04T10:14:41.2345042Z * [new branch] gh/pearu/119/orig -> origin/gh/pearu/119/orig 2025-12-04T10:14:41.2345108Z * [new branch] gh/pearu/139/base -> origin/gh/pearu/139/base 2025-12-04T10:14:41.2345175Z * [new branch] gh/pearu/139/head -> origin/gh/pearu/139/head 2025-12-04T10:14:41.2345240Z * [new branch] gh/pearu/139/orig -> origin/gh/pearu/139/orig 2025-12-04T10:14:41.2345306Z * [new branch] gh/pearu/140/base -> origin/gh/pearu/140/base 2025-12-04T10:14:41.2345375Z * [new branch] gh/pearu/140/head -> origin/gh/pearu/140/head 2025-12-04T10:14:41.2345441Z * [new branch] gh/pearu/140/orig -> origin/gh/pearu/140/orig 2025-12-04T10:14:41.2345509Z * [new branch] gh/pearu/142/base -> origin/gh/pearu/142/base 2025-12-04T10:14:41.2345576Z * [new branch] gh/pearu/142/head -> origin/gh/pearu/142/head 2025-12-04T10:14:41.2345643Z * [new branch] gh/pearu/142/orig -> origin/gh/pearu/142/orig 2025-12-04T10:14:41.2345709Z * [new branch] gh/pearu/143/base -> origin/gh/pearu/143/base 2025-12-04T10:14:41.2345779Z * [new branch] gh/pearu/143/head -> origin/gh/pearu/143/head 2025-12-04T10:14:41.2345846Z * [new branch] 
gh/pearu/143/orig -> origin/gh/pearu/143/orig 2025-12-04T10:14:41.2345913Z * [new branch] gh/pearu/147/base -> origin/gh/pearu/147/base 2025-12-04T10:14:41.2346011Z * [new branch] gh/pearu/147/head -> origin/gh/pearu/147/head 2025-12-04T10:14:41.2346078Z * [new branch] gh/pearu/147/orig -> origin/gh/pearu/147/orig 2025-12-04T10:14:41.2346145Z * [new branch] gh/pearu/149/base -> origin/gh/pearu/149/base 2025-12-04T10:14:41.2346217Z * [new branch] gh/pearu/149/head -> origin/gh/pearu/149/head 2025-12-04T10:14:41.2346285Z * [new branch] gh/pearu/149/orig -> origin/gh/pearu/149/orig 2025-12-04T10:14:41.2346354Z * [new branch] gh/pearu/150/base -> origin/gh/pearu/150/base 2025-12-04T10:14:41.2346425Z * [new branch] gh/pearu/150/head -> origin/gh/pearu/150/head 2025-12-04T10:14:41.2346491Z * [new branch] gh/pearu/150/orig -> origin/gh/pearu/150/orig 2025-12-04T10:14:41.2346560Z * [new branch] gh/pearu/151/base -> origin/gh/pearu/151/base 2025-12-04T10:14:41.2346630Z * [new branch] gh/pearu/151/head -> origin/gh/pearu/151/head 2025-12-04T10:14:41.2346697Z * [new branch] gh/pearu/151/orig -> origin/gh/pearu/151/orig 2025-12-04T10:14:41.2346768Z * [new branch] gh/pearu/152/base -> origin/gh/pearu/152/base 2025-12-04T10:14:41.2352320Z * [new branch] gh/pearu/152/head -> origin/gh/pearu/152/head 2025-12-04T10:14:41.2352469Z * [new branch] gh/pearu/152/orig -> origin/gh/pearu/152/orig 2025-12-04T10:14:41.2352540Z * [new branch] gh/pearu/153/base -> origin/gh/pearu/153/base 2025-12-04T10:14:41.2352609Z * [new branch] gh/pearu/153/head -> origin/gh/pearu/153/head 2025-12-04T10:14:41.2352677Z * [new branch] gh/pearu/153/orig -> origin/gh/pearu/153/orig 2025-12-04T10:14:41.2352745Z * [new branch] gh/pearu/154/base -> origin/gh/pearu/154/base 2025-12-04T10:14:41.2352817Z * [new branch] gh/pearu/154/head -> origin/gh/pearu/154/head 2025-12-04T10:14:41.2352883Z * [new branch] gh/pearu/154/orig -> origin/gh/pearu/154/orig 2025-12-04T10:14:41.2352952Z * [new branch] gh/pearu/155/base -> origin/gh/pearu/155/base 2025-12-04T10:14:41.2353018Z * [new branch] gh/pearu/155/head -> origin/gh/pearu/155/head 2025-12-04T10:14:41.2353087Z * [new branch] gh/pearu/155/orig -> origin/gh/pearu/155/orig 2025-12-04T10:14:41.2353156Z * [new branch] gh/pearu/156/base -> origin/gh/pearu/156/base 2025-12-04T10:14:41.2353222Z * [new branch] gh/pearu/156/head -> origin/gh/pearu/156/head 2025-12-04T10:14:41.2353288Z * [new branch] gh/pearu/156/orig -> origin/gh/pearu/156/orig 2025-12-04T10:14:41.2353357Z * [new branch] gh/pearu/56/base -> origin/gh/pearu/56/base 2025-12-04T10:14:41.2353425Z * [new branch] gh/pearu/56/head -> origin/gh/pearu/56/head 2025-12-04T10:14:41.2353490Z * [new branch] gh/pearu/56/orig -> origin/gh/pearu/56/orig 2025-12-04T10:14:41.2353558Z * [new branch] gh/pearu/97/base -> origin/gh/pearu/97/base 2025-12-04T10:14:41.2353623Z * [new branch] gh/pearu/97/head -> origin/gh/pearu/97/head 2025-12-04T10:14:41.2353691Z * [new branch] gh/pearu/97/orig -> origin/gh/pearu/97/orig 2025-12-04T10:14:41.2353768Z * [new branch] gh/pianpwk/21/base -> origin/gh/pianpwk/21/base 2025-12-04T10:14:41.2353839Z * [new branch] gh/pianpwk/21/head -> origin/gh/pianpwk/21/head 2025-12-04T10:14:41.2353910Z * [new branch] gh/pianpwk/28/base -> origin/gh/pianpwk/28/base 2025-12-04T10:14:41.2353980Z * [new branch] gh/pianpwk/28/head -> origin/gh/pianpwk/28/head 2025-12-04T10:14:41.2354090Z * [new branch] gh/pianpwk/28/orig -> origin/gh/pianpwk/28/orig 2025-12-04T10:14:41.2354162Z * [new branch] gh/pianpwk/29/base -> 
origin/gh/pianpwk/29/base 2025-12-04T10:14:41.2354230Z * [new branch] gh/pianpwk/29/head -> origin/gh/pianpwk/29/head 2025-12-04T10:14:41.2354299Z * [new branch] gh/pianpwk/29/orig -> origin/gh/pianpwk/29/orig 2025-12-04T10:14:41.2354372Z * [new branch] gh/pianpwk/30/base -> origin/gh/pianpwk/30/base 2025-12-04T10:14:41.2354441Z * [new branch] gh/pianpwk/30/head -> origin/gh/pianpwk/30/head 2025-12-04T10:14:41.2354509Z * [new branch] gh/pianpwk/30/orig -> origin/gh/pianpwk/30/orig 2025-12-04T10:14:41.2354581Z * [new branch] gh/pianpwk/31/base -> origin/gh/pianpwk/31/base 2025-12-04T10:14:41.2354649Z * [new branch] gh/pianpwk/31/head -> origin/gh/pianpwk/31/head 2025-12-04T10:14:41.2354718Z * [new branch] gh/pianpwk/31/orig -> origin/gh/pianpwk/31/orig 2025-12-04T10:14:41.2354789Z * [new branch] gh/pianpwk/32/base -> origin/gh/pianpwk/32/base 2025-12-04T10:14:41.2354858Z * [new branch] gh/pianpwk/32/head -> origin/gh/pianpwk/32/head 2025-12-04T10:14:41.2354926Z * [new branch] gh/pianpwk/32/orig -> origin/gh/pianpwk/32/orig 2025-12-04T10:14:41.2355026Z * [new branch] gh/pianpwk/33/base -> origin/gh/pianpwk/33/base 2025-12-04T10:14:41.2355095Z * [new branch] gh/pianpwk/33/head -> origin/gh/pianpwk/33/head 2025-12-04T10:14:41.2355165Z * [new branch] gh/pianpwk/33/orig -> origin/gh/pianpwk/33/orig 2025-12-04T10:14:41.2355235Z * [new branch] gh/pianpwk/34/base -> origin/gh/pianpwk/34/base 2025-12-04T10:14:41.2355303Z * [new branch] gh/pianpwk/34/head -> origin/gh/pianpwk/34/head 2025-12-04T10:14:41.2355372Z * [new branch] gh/pianpwk/34/orig -> origin/gh/pianpwk/34/orig 2025-12-04T10:14:41.2355443Z * [new branch] gh/pianpwk/35/base -> origin/gh/pianpwk/35/base 2025-12-04T10:14:41.2355512Z * [new branch] gh/pianpwk/35/head -> origin/gh/pianpwk/35/head 2025-12-04T10:14:41.2355582Z * [new branch] gh/pianpwk/35/orig -> origin/gh/pianpwk/35/orig 2025-12-04T10:14:41.2355651Z * [new branch] gh/rec/141/base -> origin/gh/rec/141/base 2025-12-04T10:14:41.2355717Z * [new branch] gh/rec/141/head -> origin/gh/rec/141/head 2025-12-04T10:14:41.2355784Z * [new branch] gh/rec/153/base -> origin/gh/rec/153/base 2025-12-04T10:14:41.2355847Z * [new branch] gh/rec/153/head -> origin/gh/rec/153/head 2025-12-04T10:14:41.2355911Z * [new branch] gh/rec/153/orig -> origin/gh/rec/153/orig 2025-12-04T10:14:41.2355975Z * [new branch] gh/rec/154/base -> origin/gh/rec/154/base 2025-12-04T10:14:41.2356039Z * [new branch] gh/rec/154/head -> origin/gh/rec/154/head 2025-12-04T10:14:41.2356102Z * [new branch] gh/rec/154/orig -> origin/gh/rec/154/orig 2025-12-04T10:14:41.2356167Z * [new branch] gh/rec/164/base -> origin/gh/rec/164/base 2025-12-04T10:14:41.2356233Z * [new branch] gh/rec/164/head -> origin/gh/rec/164/head 2025-12-04T10:14:41.2356296Z * [new branch] gh/rec/164/orig -> origin/gh/rec/164/orig 2025-12-04T10:14:41.2356359Z * [new branch] gh/rec/166/base -> origin/gh/rec/166/base 2025-12-04T10:14:41.2356421Z * [new branch] gh/rec/166/head -> origin/gh/rec/166/head 2025-12-04T10:14:41.2356484Z * [new branch] gh/rec/166/orig -> origin/gh/rec/166/orig 2025-12-04T10:14:41.2356549Z * [new branch] gh/rec/167/base -> origin/gh/rec/167/base 2025-12-04T10:14:41.2356638Z * [new branch] gh/rec/167/head -> origin/gh/rec/167/head 2025-12-04T10:14:41.2356700Z * [new branch] gh/rec/167/orig -> origin/gh/rec/167/orig 2025-12-04T10:14:41.2356765Z * [new branch] gh/rec/168/base -> origin/gh/rec/168/base 2025-12-04T10:14:41.2356828Z * [new branch] gh/rec/168/head -> origin/gh/rec/168/head 2025-12-04T10:14:41.2356892Z * [new branch] 
gh/rec/168/orig -> origin/gh/rec/168/orig 2025-12-04T10:14:41.2356959Z * [new branch] gh/rec/169/base -> origin/gh/rec/169/base 2025-12-04T10:14:41.2357021Z * [new branch] gh/rec/169/head -> origin/gh/rec/169/head 2025-12-04T10:14:41.2357083Z * [new branch] gh/rec/169/orig -> origin/gh/rec/169/orig 2025-12-04T10:14:41.2357148Z * [new branch] gh/rec/170/base -> origin/gh/rec/170/base 2025-12-04T10:14:41.2357214Z * [new branch] gh/rec/170/head -> origin/gh/rec/170/head 2025-12-04T10:14:41.2357277Z * [new branch] gh/rec/170/orig -> origin/gh/rec/170/orig 2025-12-04T10:14:41.2357343Z * [new branch] gh/rec/171/base -> origin/gh/rec/171/base 2025-12-04T10:14:41.2357405Z * [new branch] gh/rec/171/head -> origin/gh/rec/171/head 2025-12-04T10:14:41.2357509Z * [new branch] gh/rec/171/orig -> origin/gh/rec/171/orig 2025-12-04T10:14:41.2357573Z * [new branch] gh/rec/172/base -> origin/gh/rec/172/base 2025-12-04T10:14:41.2357636Z * [new branch] gh/rec/172/head -> origin/gh/rec/172/head 2025-12-04T10:14:41.2357706Z * [new branch] gh/rec/172/orig -> origin/gh/rec/172/orig 2025-12-04T10:14:41.2357769Z * [new branch] gh/rec/173/base -> origin/gh/rec/173/base 2025-12-04T10:14:41.2357832Z * [new branch] gh/rec/173/head -> origin/gh/rec/173/head 2025-12-04T10:14:41.2357899Z * [new branch] gh/rec/173/orig -> origin/gh/rec/173/orig 2025-12-04T10:14:41.2357962Z * [new branch] gh/rec/174/base -> origin/gh/rec/174/base 2025-12-04T10:14:41.2358024Z * [new branch] gh/rec/174/head -> origin/gh/rec/174/head 2025-12-04T10:14:41.2358093Z * [new branch] gh/rec/174/orig -> origin/gh/rec/174/orig 2025-12-04T10:14:41.2358157Z * [new branch] gh/rec/175/base -> origin/gh/rec/175/base 2025-12-04T10:14:41.2358222Z * [new branch] gh/rec/175/head -> origin/gh/rec/175/head 2025-12-04T10:14:41.2358287Z * [new branch] gh/rec/175/orig -> origin/gh/rec/175/orig 2025-12-04T10:14:41.2358350Z * [new branch] gh/rec/176/base -> origin/gh/rec/176/base 2025-12-04T10:14:41.2358413Z * [new branch] gh/rec/176/head -> origin/gh/rec/176/head 2025-12-04T10:14:41.2358479Z * [new branch] gh/rec/176/orig -> origin/gh/rec/176/orig 2025-12-04T10:14:41.2358542Z * [new branch] gh/rec/177/base -> origin/gh/rec/177/base 2025-12-04T10:14:41.2358603Z * [new branch] gh/rec/177/head -> origin/gh/rec/177/head 2025-12-04T10:14:41.2358667Z * [new branch] gh/rec/177/orig -> origin/gh/rec/177/orig 2025-12-04T10:14:41.2358758Z * [new branch] gh/robert-hardwick/3/base -> origin/gh/robert-hardwick/3/base 2025-12-04T10:14:41.2358844Z * [new branch] gh/robert-hardwick/3/head -> origin/gh/robert-hardwick/3/head 2025-12-04T10:14:41.2358926Z * [new branch] gh/robert-hardwick/3/orig -> origin/gh/robert-hardwick/3/orig 2025-12-04T10:14:41.2359007Z * [new branch] gh/robert-hardwick/4/base -> origin/gh/robert-hardwick/4/base 2025-12-04T10:14:41.2359089Z * [new branch] gh/robert-hardwick/4/head -> origin/gh/robert-hardwick/4/head 2025-12-04T10:14:41.2359199Z * [new branch] gh/robert-hardwick/4/orig -> origin/gh/robert-hardwick/4/orig 2025-12-04T10:14:41.2359280Z * [new branch] gh/robert-hardwick/5/base -> origin/gh/robert-hardwick/5/base 2025-12-04T10:14:41.2359362Z * [new branch] gh/robert-hardwick/5/head -> origin/gh/robert-hardwick/5/head 2025-12-04T10:14:41.2359444Z * [new branch] gh/robert-hardwick/5/orig -> origin/gh/robert-hardwick/5/orig 2025-12-04T10:14:41.2359524Z * [new branch] gh/robert-hardwick/6/base -> origin/gh/robert-hardwick/6/base 2025-12-04T10:14:41.2359604Z * [new branch] gh/robert-hardwick/6/head -> origin/gh/robert-hardwick/6/head 
2025-12-04T10:14:41.2359686Z * [new branch] gh/robert-hardwick/6/orig -> origin/gh/robert-hardwick/6/orig 2025-12-04T10:14:41.2359766Z * [new branch] gh/robert-hardwick/7/base -> origin/gh/robert-hardwick/7/base 2025-12-04T10:14:41.2359851Z * [new branch] gh/robert-hardwick/7/head -> origin/gh/robert-hardwick/7/head 2025-12-04T10:14:41.2359932Z * [new branch] gh/robert-hardwick/7/orig -> origin/gh/robert-hardwick/7/orig 2025-12-04T10:14:41.2360013Z * [new branch] gh/robert-hardwick/8/base -> origin/gh/robert-hardwick/8/base 2025-12-04T10:14:41.2360127Z * [new branch] gh/robert-hardwick/8/head -> origin/gh/robert-hardwick/8/head 2025-12-04T10:14:41.2360209Z * [new branch] gh/robert-hardwick/8/orig -> origin/gh/robert-hardwick/8/orig 2025-12-04T10:14:41.2360291Z * [new branch] gh/robert-hardwick/9/base -> origin/gh/robert-hardwick/9/base 2025-12-04T10:14:41.2360371Z * [new branch] gh/robert-hardwick/9/head -> origin/gh/robert-hardwick/9/head 2025-12-04T10:14:41.2360453Z * [new branch] gh/robert-hardwick/9/orig -> origin/gh/robert-hardwick/9/orig 2025-12-04T10:14:41.2360525Z * [new branch] gh/rtimpe/1/base -> origin/gh/rtimpe/1/base 2025-12-04T10:14:41.2360595Z * [new branch] gh/rtimpe/1/head -> origin/gh/rtimpe/1/head 2025-12-04T10:14:41.2360710Z * [new branch] gh/rtimpe/2/base -> origin/gh/rtimpe/2/base 2025-12-04T10:14:41.2360777Z * [new branch] gh/rtimpe/2/head -> origin/gh/rtimpe/2/head 2025-12-04T10:14:41.2360848Z * [new branch] gh/rtimpe/22/base -> origin/gh/rtimpe/22/base 2025-12-04T10:14:41.2360915Z * [new branch] gh/rtimpe/22/head -> origin/gh/rtimpe/22/head 2025-12-04T10:14:41.2360984Z * [new branch] gh/rtimpe/22/orig -> origin/gh/rtimpe/22/orig 2025-12-04T10:14:41.2361050Z * [new branch] gh/rtimpe/23/base -> origin/gh/rtimpe/23/base 2025-12-04T10:14:41.2361116Z * [new branch] gh/rtimpe/23/head -> origin/gh/rtimpe/23/head 2025-12-04T10:14:41.2361184Z * [new branch] gh/rtimpe/23/orig -> origin/gh/rtimpe/23/orig 2025-12-04T10:14:41.2361252Z * [new branch] gh/rtimpe/24/base -> origin/gh/rtimpe/24/base 2025-12-04T10:14:41.2361318Z * [new branch] gh/rtimpe/24/head -> origin/gh/rtimpe/24/head 2025-12-04T10:14:41.2361387Z * [new branch] gh/rtimpe/24/orig -> origin/gh/rtimpe/24/orig 2025-12-04T10:14:41.2361455Z * [new branch] gh/rtimpe/25/base -> origin/gh/rtimpe/25/base 2025-12-04T10:14:41.2361521Z * [new branch] gh/rtimpe/25/head -> origin/gh/rtimpe/25/head 2025-12-04T10:14:41.2361588Z * [new branch] gh/rtimpe/25/orig -> origin/gh/rtimpe/25/orig 2025-12-04T10:14:41.2361655Z * [new branch] gh/rtimpe/26/base -> origin/gh/rtimpe/26/base 2025-12-04T10:14:41.2361720Z * [new branch] gh/rtimpe/26/head -> origin/gh/rtimpe/26/head 2025-12-04T10:14:41.2361788Z * [new branch] gh/rtimpe/26/orig -> origin/gh/rtimpe/26/orig 2025-12-04T10:14:41.2361897Z * [new branch] gh/rtimpe/27/base -> origin/gh/rtimpe/27/base 2025-12-04T10:14:41.2361965Z * [new branch] gh/rtimpe/27/head -> origin/gh/rtimpe/27/head 2025-12-04T10:14:41.2362032Z * [new branch] gh/rtimpe/27/orig -> origin/gh/rtimpe/27/orig 2025-12-04T10:14:41.2362097Z * [new branch] gh/rtimpe/28/base -> origin/gh/rtimpe/28/base 2025-12-04T10:14:41.2362166Z * [new branch] gh/rtimpe/28/head -> origin/gh/rtimpe/28/head 2025-12-04T10:14:41.2362232Z * [new branch] gh/rtimpe/28/orig -> origin/gh/rtimpe/28/orig 2025-12-04T10:14:41.2362298Z * [new branch] gh/rtimpe/29/base -> origin/gh/rtimpe/29/base 2025-12-04T10:14:41.2362364Z * [new branch] gh/rtimpe/29/head -> origin/gh/rtimpe/29/head 2025-12-04T10:14:41.2362429Z * [new branch] gh/rtimpe/29/orig -> 
origin/gh/rtimpe/29/orig 2025-12-04T10:14:41.2362497Z * [new branch] gh/rtimpe/3/base -> origin/gh/rtimpe/3/base 2025-12-04T10:14:41.2362563Z * [new branch] gh/rtimpe/3/head -> origin/gh/rtimpe/3/head 2025-12-04T10:14:41.2362629Z * [new branch] gh/rtimpe/30/base -> origin/gh/rtimpe/30/base 2025-12-04T10:14:41.2362695Z * [new branch] gh/rtimpe/30/head -> origin/gh/rtimpe/30/head 2025-12-04T10:14:41.2362800Z * [new branch] gh/rtimpe/30/orig -> origin/gh/rtimpe/30/orig 2025-12-04T10:14:41.2362868Z * [new branch] gh/rtimpe/31/base -> origin/gh/rtimpe/31/base 2025-12-04T10:14:41.2362935Z * [new branch] gh/rtimpe/31/head -> origin/gh/rtimpe/31/head 2025-12-04T10:14:41.2363003Z * [new branch] gh/rtimpe/31/orig -> origin/gh/rtimpe/31/orig 2025-12-04T10:14:41.2363069Z * [new branch] gh/rtimpe/32/base -> origin/gh/rtimpe/32/base 2025-12-04T10:14:41.2363138Z * [new branch] gh/rtimpe/32/head -> origin/gh/rtimpe/32/head 2025-12-04T10:14:41.2363205Z * [new branch] gh/rtimpe/32/orig -> origin/gh/rtimpe/32/orig 2025-12-04T10:14:41.2363271Z * [new branch] gh/rtimpe/33/base -> origin/gh/rtimpe/33/base 2025-12-04T10:14:41.2363336Z * [new branch] gh/rtimpe/33/head -> origin/gh/rtimpe/33/head 2025-12-04T10:14:41.2363404Z * [new branch] gh/rtimpe/33/orig -> origin/gh/rtimpe/33/orig 2025-12-04T10:14:41.2363470Z * [new branch] gh/rtimpe/34/base -> origin/gh/rtimpe/34/base 2025-12-04T10:14:41.2363538Z * [new branch] gh/rtimpe/34/head -> origin/gh/rtimpe/34/head 2025-12-04T10:14:41.2363603Z * [new branch] gh/rtimpe/34/orig -> origin/gh/rtimpe/34/orig 2025-12-04T10:14:41.2363668Z * [new branch] gh/rtimpe/35/base -> origin/gh/rtimpe/35/base 2025-12-04T10:14:41.2363739Z * [new branch] gh/rtimpe/35/head -> origin/gh/rtimpe/35/head 2025-12-04T10:14:41.2363805Z * [new branch] gh/rtimpe/35/orig -> origin/gh/rtimpe/35/orig 2025-12-04T10:14:41.2363873Z * [new branch] gh/rtimpe/4/base -> origin/gh/rtimpe/4/base 2025-12-04T10:14:41.2363940Z * [new branch] gh/rtimpe/4/head -> origin/gh/rtimpe/4/head 2025-12-04T10:14:41.2364022Z * [new branch] gh/ruisizhang123/1/base -> origin/gh/ruisizhang123/1/base 2025-12-04T10:14:41.2364101Z * [new branch] gh/ruisizhang123/1/head -> origin/gh/ruisizhang123/1/head 2025-12-04T10:14:41.2364178Z * [new branch] gh/ruisizhang123/1/orig -> origin/gh/ruisizhang123/1/orig 2025-12-04T10:14:41.2364253Z * [new branch] gh/ruisizhang123/4/base -> origin/gh/ruisizhang123/4/base 2025-12-04T10:14:41.2364327Z * [new branch] gh/ruisizhang123/4/head -> origin/gh/ruisizhang123/4/head 2025-12-04T10:14:41.2364435Z * [new branch] gh/ruisizhang123/4/orig -> origin/gh/ruisizhang123/4/orig 2025-12-04T10:14:41.2364510Z * [new branch] gh/ruisizhang123/5/base -> origin/gh/ruisizhang123/5/base 2025-12-04T10:14:41.2364584Z * [new branch] gh/ruisizhang123/5/head -> origin/gh/ruisizhang123/5/head 2025-12-04T10:14:41.2364659Z * [new branch] gh/ruisizhang123/5/orig -> origin/gh/ruisizhang123/5/orig 2025-12-04T10:14:41.2364734Z * [new branch] gh/ruisizhang123/6/base -> origin/gh/ruisizhang123/6/base 2025-12-04T10:14:41.2364808Z * [new branch] gh/ruisizhang123/6/head -> origin/gh/ruisizhang123/6/head 2025-12-04T10:14:41.2364884Z * [new branch] gh/ruisizhang123/6/orig -> origin/gh/ruisizhang123/6/orig 2025-12-04T10:14:41.2364958Z * [new branch] gh/ruisizhang123/7/base -> origin/gh/ruisizhang123/7/base 2025-12-04T10:14:41.2365033Z * [new branch] gh/ruisizhang123/7/head -> origin/gh/ruisizhang123/7/head 2025-12-04T10:14:41.2365110Z * [new branch] gh/ruisizhang123/7/orig -> origin/gh/ruisizhang123/7/orig 
2025-12-04T10:14:41.2365183Z * [new branch] gh/ruisizhang123/8/base -> origin/gh/ruisizhang123/8/base 2025-12-04T10:14:41.2365259Z * [new branch] gh/ruisizhang123/8/head -> origin/gh/ruisizhang123/8/head 2025-12-04T10:14:41.2365359Z * [new branch] gh/ruisizhang123/8/orig -> origin/gh/ruisizhang123/8/orig 2025-12-04T10:14:41.2365433Z * [new branch] gh/ruisizhang123/9/base -> origin/gh/ruisizhang123/9/base 2025-12-04T10:14:41.2365509Z * [new branch] gh/ruisizhang123/9/head -> origin/gh/ruisizhang123/9/head 2025-12-04T10:14:41.2365584Z * [new branch] gh/ruisizhang123/9/orig -> origin/gh/ruisizhang123/9/orig 2025-12-04T10:14:41.2365660Z * [new branch] gh/seemethere/52/base -> origin/gh/seemethere/52/base 2025-12-04T10:14:41.2365740Z * [new branch] gh/seemethere/52/head -> origin/gh/seemethere/52/head 2025-12-04T10:14:41.2365817Z * [new branch] gh/seemethere/52/orig -> origin/gh/seemethere/52/orig 2025-12-04T10:14:41.2365889Z * [new branch] gh/seemethere/53/base -> origin/gh/seemethere/53/base 2025-12-04T10:14:41.2365962Z * [new branch] gh/seemethere/53/head -> origin/gh/seemethere/53/head 2025-12-04T10:14:41.2366035Z * [new branch] gh/seemethere/53/orig -> origin/gh/seemethere/53/orig 2025-12-04T10:14:41.2366107Z * [new branch] gh/seemethere/54/base -> origin/gh/seemethere/54/base 2025-12-04T10:14:41.2366183Z * [new branch] gh/seemethere/54/head -> origin/gh/seemethere/54/head 2025-12-04T10:14:41.2366255Z * [new branch] gh/seemethere/54/orig -> origin/gh/seemethere/54/orig 2025-12-04T10:14:41.2366326Z * [new branch] gh/seemethere/55/base -> origin/gh/seemethere/55/base 2025-12-04T10:14:41.2366400Z * [new branch] gh/seemethere/55/head -> origin/gh/seemethere/55/head 2025-12-04T10:14:41.2366473Z * [new branch] gh/seemethere/55/orig -> origin/gh/seemethere/55/orig 2025-12-04T10:14:41.2366545Z * [new branch] gh/seemethere/59/base -> origin/gh/seemethere/59/base 2025-12-04T10:14:41.2366617Z * [new branch] gh/seemethere/59/head -> origin/gh/seemethere/59/head 2025-12-04T10:14:41.2366691Z * [new branch] gh/seemethere/59/orig -> origin/gh/seemethere/59/orig 2025-12-04T10:14:41.2366763Z * [new branch] gh/seemethere/62/base -> origin/gh/seemethere/62/base 2025-12-04T10:14:41.2366835Z * [new branch] gh/seemethere/62/head -> origin/gh/seemethere/62/head 2025-12-04T10:14:41.2366908Z * [new branch] gh/seemethere/62/orig -> origin/gh/seemethere/62/orig 2025-12-04T10:14:41.2366981Z * [new branch] gh/seemethere/63/base -> origin/gh/seemethere/63/base 2025-12-04T10:14:41.2367081Z * [new branch] gh/seemethere/63/head -> origin/gh/seemethere/63/head 2025-12-04T10:14:41.2367154Z * [new branch] gh/seemethere/63/orig -> origin/gh/seemethere/63/orig 2025-12-04T10:14:41.2367226Z * [new branch] gh/seemethere/71/base -> origin/gh/seemethere/71/base 2025-12-04T10:14:41.2367299Z * [new branch] gh/seemethere/71/head -> origin/gh/seemethere/71/head 2025-12-04T10:14:41.2367374Z * [new branch] gh/seemethere/71/orig -> origin/gh/seemethere/71/orig 2025-12-04T10:14:41.2367450Z * [new branch] gh/seemethere/72/base -> origin/gh/seemethere/72/base 2025-12-04T10:14:41.2367521Z * [new branch] gh/seemethere/72/head -> origin/gh/seemethere/72/head 2025-12-04T10:14:41.2367594Z * [new branch] gh/seemethere/72/orig -> origin/gh/seemethere/72/orig 2025-12-04T10:14:41.2367669Z * [new branch] gh/seemethere/73/base -> origin/gh/seemethere/73/base 2025-12-04T10:14:41.2367743Z * [new branch] gh/seemethere/73/head -> origin/gh/seemethere/73/head 2025-12-04T10:14:41.2367816Z * [new branch] gh/seemethere/73/orig -> origin/gh/seemethere/73/orig 
2025-12-04T10:14:41.2367888Z * [new branch] gh/seemethere/74/base -> origin/gh/seemethere/74/base 2025-12-04T10:14:41.2367961Z * [new branch] gh/seemethere/74/head -> origin/gh/seemethere/74/head 2025-12-04T10:14:41.2368066Z * [new branch] gh/seemethere/74/orig -> origin/gh/seemethere/74/orig 2025-12-04T10:14:41.2368140Z * [new branch] gh/seemethere/75/base -> origin/gh/seemethere/75/base 2025-12-04T10:14:41.2368212Z * [new branch] gh/seemethere/75/head -> origin/gh/seemethere/75/head 2025-12-04T10:14:41.2368283Z * [new branch] gh/seemethere/75/orig -> origin/gh/seemethere/75/orig 2025-12-04T10:14:41.2368356Z * [new branch] gh/seemethere/76/base -> origin/gh/seemethere/76/base 2025-12-04T10:14:41.2368428Z * [new branch] gh/seemethere/76/head -> origin/gh/seemethere/76/head 2025-12-04T10:14:41.2368501Z * [new branch] gh/seemethere/76/orig -> origin/gh/seemethere/76/orig 2025-12-04T10:14:41.2368578Z * [new branch] gh/shunting314/145/base -> origin/gh/shunting314/145/base 2025-12-04T10:14:41.2368655Z * [new branch] gh/shunting314/145/head -> origin/gh/shunting314/145/head 2025-12-04T10:14:41.2368730Z * [new branch] gh/shunting314/145/orig -> origin/gh/shunting314/145/orig 2025-12-04T10:14:41.2368803Z * [new branch] gh/shunting314/176/base -> origin/gh/shunting314/176/base 2025-12-04T10:14:41.2368877Z * [new branch] gh/shunting314/176/head -> origin/gh/shunting314/176/head 2025-12-04T10:14:41.2368950Z * [new branch] gh/shunting314/176/orig -> origin/gh/shunting314/176/orig 2025-12-04T10:14:41.2369023Z * [new branch] gh/shunting314/249/base -> origin/gh/shunting314/249/base 2025-12-04T10:14:41.2369096Z * [new branch] gh/shunting314/249/head -> origin/gh/shunting314/249/head 2025-12-04T10:14:41.2369170Z * [new branch] gh/shunting314/249/orig -> origin/gh/shunting314/249/orig 2025-12-04T10:14:41.2369244Z * [new branch] gh/shunting314/253/base -> origin/gh/shunting314/253/base 2025-12-04T10:14:41.2369318Z * [new branch] gh/shunting314/253/head -> origin/gh/shunting314/253/head 2025-12-04T10:14:41.2369396Z * [new branch] gh/shunting314/253/orig -> origin/gh/shunting314/253/orig 2025-12-04T10:14:41.2369469Z * [new branch] gh/shunting314/256/base -> origin/gh/shunting314/256/base 2025-12-04T10:14:41.2369542Z * [new branch] gh/shunting314/256/head -> origin/gh/shunting314/256/head 2025-12-04T10:14:41.2369616Z * [new branch] gh/shunting314/256/orig -> origin/gh/shunting314/256/orig 2025-12-04T10:14:41.2369714Z * [new branch] gh/shunting314/257/base -> origin/gh/shunting314/257/base 2025-12-04T10:14:41.2369788Z * [new branch] gh/shunting314/257/head -> origin/gh/shunting314/257/head 2025-12-04T10:14:41.2369861Z * [new branch] gh/shunting314/257/orig -> origin/gh/shunting314/257/orig 2025-12-04T10:14:41.2369935Z * [new branch] gh/shunting314/258/base -> origin/gh/shunting314/258/base 2025-12-04T10:14:41.2370010Z * [new branch] gh/shunting314/258/head -> origin/gh/shunting314/258/head 2025-12-04T10:14:41.2370083Z * [new branch] gh/shunting314/258/orig -> origin/gh/shunting314/258/orig 2025-12-04T10:14:41.2370157Z * [new branch] gh/shunting314/259/base -> origin/gh/shunting314/259/base 2025-12-04T10:14:41.2370231Z * [new branch] gh/shunting314/259/head -> origin/gh/shunting314/259/head 2025-12-04T10:14:41.2370304Z * [new branch] gh/shunting314/259/orig -> origin/gh/shunting314/259/orig 2025-12-04T10:14:41.2370378Z * [new branch] gh/shunting314/260/base -> origin/gh/shunting314/260/base 2025-12-04T10:14:41.2370452Z * [new branch] gh/shunting314/260/head -> origin/gh/shunting314/260/head 
2025-12-04T10:14:41.2370525Z * [new branch] gh/shunting314/260/orig -> origin/gh/shunting314/260/orig 2025-12-04T10:14:41.2370681Z * [new branch] gh/shunting314/261/base -> origin/gh/shunting314/261/base 2025-12-04T10:14:41.2370758Z * [new branch] gh/shunting314/261/head -> origin/gh/shunting314/261/head 2025-12-04T10:14:41.2370830Z * [new branch] gh/shunting314/261/orig -> origin/gh/shunting314/261/orig 2025-12-04T10:14:41.2370904Z * [new branch] gh/shunting314/262/base -> origin/gh/shunting314/262/base 2025-12-04T10:14:41.2370978Z * [new branch] gh/shunting314/262/head -> origin/gh/shunting314/262/head 2025-12-04T10:14:41.2371051Z * [new branch] gh/shunting314/262/orig -> origin/gh/shunting314/262/orig 2025-12-04T10:14:41.2371127Z * [new branch] gh/shunting314/263/base -> origin/gh/shunting314/263/base 2025-12-04T10:14:41.2371202Z * [new branch] gh/shunting314/263/head -> origin/gh/shunting314/263/head 2025-12-04T10:14:41.2371275Z * [new branch] gh/shunting314/263/orig -> origin/gh/shunting314/263/orig 2025-12-04T10:14:41.2371348Z * [new branch] gh/shunting314/264/base -> origin/gh/shunting314/264/base 2025-12-04T10:14:41.2371422Z * [new branch] gh/shunting314/264/head -> origin/gh/shunting314/264/head 2025-12-04T10:14:41.2371495Z * [new branch] gh/shunting314/264/orig -> origin/gh/shunting314/264/orig 2025-12-04T10:14:41.2371569Z * [new branch] gh/shunting314/265/base -> origin/gh/shunting314/265/base 2025-12-04T10:14:41.2371642Z * [new branch] gh/shunting314/265/head -> origin/gh/shunting314/265/head 2025-12-04T10:14:41.2371718Z * [new branch] gh/shunting314/265/orig -> origin/gh/shunting314/265/orig 2025-12-04T10:14:41.2371792Z * [new branch] gh/shunting314/266/base -> origin/gh/shunting314/266/base 2025-12-04T10:14:41.2371864Z * [new branch] gh/shunting314/266/head -> origin/gh/shunting314/266/head 2025-12-04T10:14:41.2371937Z * [new branch] gh/shunting314/266/orig -> origin/gh/shunting314/266/orig 2025-12-04T10:14:41.2372014Z * [new branch] gh/shunting314/267/base -> origin/gh/shunting314/267/base 2025-12-04T10:14:41.2372087Z * [new branch] gh/shunting314/267/head -> origin/gh/shunting314/267/head 2025-12-04T10:14:41.2372160Z * [new branch] gh/shunting314/267/orig -> origin/gh/shunting314/267/orig 2025-12-04T10:14:41.2372233Z * [new branch] gh/shunting314/268/base -> origin/gh/shunting314/268/base 2025-12-04T10:14:41.2372306Z * [new branch] gh/shunting314/268/head -> origin/gh/shunting314/268/head 2025-12-04T10:14:41.2372423Z * [new branch] gh/shunting314/268/orig -> origin/gh/shunting314/268/orig 2025-12-04T10:14:41.2372496Z * [new branch] gh/shunting314/269/base -> origin/gh/shunting314/269/base 2025-12-04T10:14:41.2372569Z * [new branch] gh/shunting314/269/head -> origin/gh/shunting314/269/head 2025-12-04T10:14:41.2372644Z * [new branch] gh/shunting314/269/orig -> origin/gh/shunting314/269/orig 2025-12-04T10:14:41.2372717Z * [new branch] gh/silverguo/1/base -> origin/gh/silverguo/1/base 2025-12-04T10:14:41.2372788Z * [new branch] gh/silverguo/1/head -> origin/gh/silverguo/1/head 2025-12-04T10:14:41.2372860Z * [new branch] gh/silverguo/2/base -> origin/gh/silverguo/2/base 2025-12-04T10:14:41.2372931Z * [new branch] gh/silverguo/2/head -> origin/gh/silverguo/2/head 2025-12-04T10:14:41.2373001Z * [new branch] gh/silverguo/3/base -> origin/gh/silverguo/3/base 2025-12-04T10:14:41.2373072Z * [new branch] gh/silverguo/3/head -> origin/gh/silverguo/3/head 2025-12-04T10:14:41.2373142Z * [new branch] gh/silverguo/4/base -> origin/gh/silverguo/4/base 2025-12-04T10:14:41.2373211Z * [new 
branch] gh/silverguo/4/head -> origin/gh/silverguo/4/head 2025-12-04T10:14:41.2373315Z * [new branch] gh/slayton58/39/base -> origin/gh/slayton58/39/base 2025-12-04T10:14:41.2373387Z * [new branch] gh/slayton58/39/head -> origin/gh/slayton58/39/head 2025-12-04T10:14:41.2373457Z * [new branch] gh/slayton58/39/orig -> origin/gh/slayton58/39/orig 2025-12-04T10:14:41.2373529Z * [new branch] gh/slayton58/42/base -> origin/gh/slayton58/42/base 2025-12-04T10:14:41.2373598Z * [new branch] gh/slayton58/42/head -> origin/gh/slayton58/42/head 2025-12-04T10:14:41.2373667Z * [new branch] gh/slayton58/42/orig -> origin/gh/slayton58/42/orig 2025-12-04T10:14:41.2373739Z * [new branch] gh/slayton58/43/base -> origin/gh/slayton58/43/base 2025-12-04T10:14:41.2373808Z * [new branch] gh/slayton58/43/head -> origin/gh/slayton58/43/head 2025-12-04T10:14:41.2373877Z * [new branch] gh/slayton58/43/orig -> origin/gh/slayton58/43/orig 2025-12-04T10:14:41.2373949Z * [new branch] gh/slayton58/44/base -> origin/gh/slayton58/44/base 2025-12-04T10:14:41.2374019Z * [new branch] gh/slayton58/44/head -> origin/gh/slayton58/44/head 2025-12-04T10:14:41.2374088Z * [new branch] gh/slayton58/44/orig -> origin/gh/slayton58/44/orig 2025-12-04T10:14:41.2374159Z * [new branch] gh/slayton58/45/base -> origin/gh/slayton58/45/base 2025-12-04T10:14:41.2374229Z * [new branch] gh/slayton58/45/head -> origin/gh/slayton58/45/head 2025-12-04T10:14:41.2374298Z * [new branch] gh/slayton58/45/orig -> origin/gh/slayton58/45/orig 2025-12-04T10:14:41.2374369Z * [new branch] gh/slayton58/46/base -> origin/gh/slayton58/46/base 2025-12-04T10:14:41.2374438Z * [new branch] gh/slayton58/46/head -> origin/gh/slayton58/46/head 2025-12-04T10:14:41.2374507Z * [new branch] gh/slayton58/46/orig -> origin/gh/slayton58/46/orig 2025-12-04T10:14:41.2374582Z * [new branch] gh/slayton58/6/base -> origin/gh/slayton58/6/base 2025-12-04T10:14:41.2374650Z * [new branch] gh/slayton58/6/head -> origin/gh/slayton58/6/head 2025-12-04T10:14:41.2374720Z * [new branch] gh/slayton58/7/base -> origin/gh/slayton58/7/base 2025-12-04T10:14:41.2374789Z * [new branch] gh/slayton58/7/head -> origin/gh/slayton58/7/head 2025-12-04T10:14:41.2374861Z * [new branch] gh/soulitzer/269/base -> origin/gh/soulitzer/269/base 2025-12-04T10:14:41.2374935Z * [new branch] gh/soulitzer/269/head -> origin/gh/soulitzer/269/head 2025-12-04T10:14:41.2375634Z * [new branch] gh/soulitzer/269/orig -> origin/gh/soulitzer/269/orig 2025-12-04T10:14:41.2375705Z * [new branch] gh/soulitzer/276/base -> origin/gh/soulitzer/276/base 2025-12-04T10:14:41.2375779Z * [new branch] gh/soulitzer/276/head -> origin/gh/soulitzer/276/head 2025-12-04T10:14:41.2375852Z * [new branch] gh/soulitzer/276/orig -> origin/gh/soulitzer/276/orig 2025-12-04T10:14:41.2375923Z * [new branch] gh/soulitzer/287/base -> origin/gh/soulitzer/287/base 2025-12-04T10:14:41.2375995Z * [new branch] gh/soulitzer/287/head -> origin/gh/soulitzer/287/head 2025-12-04T10:14:41.2376066Z * [new branch] gh/soulitzer/287/orig -> origin/gh/soulitzer/287/orig 2025-12-04T10:14:41.2376136Z * [new branch] gh/soulitzer/296/base -> origin/gh/soulitzer/296/base 2025-12-04T10:14:41.2376208Z * [new branch] gh/soulitzer/296/head -> origin/gh/soulitzer/296/head 2025-12-04T10:14:41.2376281Z * [new branch] gh/soulitzer/296/orig -> origin/gh/soulitzer/296/orig 2025-12-04T10:14:41.2376353Z * [new branch] gh/soulitzer/299/base -> origin/gh/soulitzer/299/base 2025-12-04T10:14:41.2376425Z * [new branch] gh/soulitzer/299/head -> origin/gh/soulitzer/299/head 
2025-12-04T10:14:41.2376524Z * [new branch] gh/soulitzer/299/orig -> origin/gh/soulitzer/299/orig 2025-12-04T10:14:41.2376596Z * [new branch] gh/soulitzer/300/base -> origin/gh/soulitzer/300/base 2025-12-04T10:14:41.2376670Z * [new branch] gh/soulitzer/300/head -> origin/gh/soulitzer/300/head 2025-12-04T10:14:41.2376741Z * [new branch] gh/soulitzer/300/orig -> origin/gh/soulitzer/300/orig 2025-12-04T10:14:41.2376812Z * [new branch] gh/soulitzer/301/base -> origin/gh/soulitzer/301/base 2025-12-04T10:14:41.2376886Z * [new branch] gh/soulitzer/301/head -> origin/gh/soulitzer/301/head 2025-12-04T10:14:41.2376958Z * [new branch] gh/soulitzer/301/orig -> origin/gh/soulitzer/301/orig 2025-12-04T10:14:41.2377030Z * [new branch] gh/soulitzer/313/base -> origin/gh/soulitzer/313/base 2025-12-04T10:14:41.2377101Z * [new branch] gh/soulitzer/313/head -> origin/gh/soulitzer/313/head 2025-12-04T10:14:41.2377174Z * [new branch] gh/soulitzer/313/orig -> origin/gh/soulitzer/313/orig 2025-12-04T10:14:41.2377247Z * [new branch] gh/soulitzer/319/base -> origin/gh/soulitzer/319/base 2025-12-04T10:14:41.2377318Z * [new branch] gh/soulitzer/319/head -> origin/gh/soulitzer/319/head 2025-12-04T10:14:41.2377389Z * [new branch] gh/soulitzer/319/orig -> origin/gh/soulitzer/319/orig 2025-12-04T10:14:41.2377461Z * [new branch] gh/soulitzer/320/base -> origin/gh/soulitzer/320/base 2025-12-04T10:14:41.2377534Z * [new branch] gh/soulitzer/320/head -> origin/gh/soulitzer/320/head 2025-12-04T10:14:41.2377605Z * [new branch] gh/soulitzer/320/orig -> origin/gh/soulitzer/320/orig 2025-12-04T10:14:41.2377676Z * [new branch] gh/soulitzer/336/base -> origin/gh/soulitzer/336/base 2025-12-04T10:14:41.2377747Z * [new branch] gh/soulitzer/336/head -> origin/gh/soulitzer/336/head 2025-12-04T10:14:41.2377819Z * [new branch] gh/soulitzer/336/orig -> origin/gh/soulitzer/336/orig 2025-12-04T10:14:41.2377891Z * [new branch] gh/soulitzer/347/base -> origin/gh/soulitzer/347/base 2025-12-04T10:14:41.2377962Z * [new branch] gh/soulitzer/347/head -> origin/gh/soulitzer/347/head 2025-12-04T10:14:41.2378123Z * [new branch] gh/soulitzer/347/orig -> origin/gh/soulitzer/347/orig 2025-12-04T10:14:41.2378195Z * [new branch] gh/soulitzer/349/base -> origin/gh/soulitzer/349/base 2025-12-04T10:14:41.2378306Z * [new branch] gh/soulitzer/349/head -> origin/gh/soulitzer/349/head 2025-12-04T10:14:41.2378377Z * [new branch] gh/soulitzer/349/orig -> origin/gh/soulitzer/349/orig 2025-12-04T10:14:41.2378450Z * [new branch] gh/soulitzer/350/base -> origin/gh/soulitzer/350/base 2025-12-04T10:14:41.2378525Z * [new branch] gh/soulitzer/350/head -> origin/gh/soulitzer/350/head 2025-12-04T10:14:41.2378598Z * [new branch] gh/soulitzer/350/orig -> origin/gh/soulitzer/350/orig 2025-12-04T10:14:41.2378668Z * [new branch] gh/soulitzer/351/base -> origin/gh/soulitzer/351/base 2025-12-04T10:14:41.2378739Z * [new branch] gh/soulitzer/351/head -> origin/gh/soulitzer/351/head 2025-12-04T10:14:41.2378811Z * [new branch] gh/soulitzer/351/orig -> origin/gh/soulitzer/351/orig 2025-12-04T10:14:41.2378882Z * [new branch] gh/soulitzer/353/base -> origin/gh/soulitzer/353/base 2025-12-04T10:14:41.2378954Z * [new branch] gh/soulitzer/353/head -> origin/gh/soulitzer/353/head 2025-12-04T10:14:41.2379027Z * [new branch] gh/soulitzer/353/orig -> origin/gh/soulitzer/353/orig 2025-12-04T10:14:41.2379098Z * [new branch] gh/soulitzer/358/base -> origin/gh/soulitzer/358/base 2025-12-04T10:14:41.2379203Z * [new branch] gh/soulitzer/358/head -> origin/gh/soulitzer/358/head 
2025-12-04T10:14:41.2379276Z * [new branch] gh/soulitzer/358/orig -> origin/gh/soulitzer/358/orig 2025-12-04T10:14:41.2379348Z * [new branch] gh/soulitzer/359/base -> origin/gh/soulitzer/359/base 2025-12-04T10:14:41.2379419Z * [new branch] gh/soulitzer/359/head -> origin/gh/soulitzer/359/head 2025-12-04T10:14:41.2379491Z * [new branch] gh/soulitzer/359/orig -> origin/gh/soulitzer/359/orig 2025-12-04T10:14:41.2379564Z * [new branch] gh/soulitzer/374/base -> origin/gh/soulitzer/374/base 2025-12-04T10:14:41.2379642Z * [new branch] gh/soulitzer/374/head -> origin/gh/soulitzer/374/head 2025-12-04T10:14:41.2379714Z * [new branch] gh/soulitzer/374/orig -> origin/gh/soulitzer/374/orig 2025-12-04T10:14:41.2379786Z * [new branch] gh/soulitzer/375/base -> origin/gh/soulitzer/375/base 2025-12-04T10:14:41.2379858Z * [new branch] gh/soulitzer/375/head -> origin/gh/soulitzer/375/head 2025-12-04T10:14:41.2379930Z * [new branch] gh/soulitzer/375/orig -> origin/gh/soulitzer/375/orig 2025-12-04T10:14:41.2380001Z * [new branch] gh/soulitzer/380/base -> origin/gh/soulitzer/380/base 2025-12-04T10:14:41.2380071Z * [new branch] gh/soulitzer/380/head -> origin/gh/soulitzer/380/head 2025-12-04T10:14:41.2380144Z * [new branch] gh/soulitzer/380/orig -> origin/gh/soulitzer/380/orig 2025-12-04T10:14:41.2380215Z * [new branch] gh/soulitzer/385/base -> origin/gh/soulitzer/385/base 2025-12-04T10:14:41.2380288Z * [new branch] gh/soulitzer/385/head -> origin/gh/soulitzer/385/head 2025-12-04T10:14:41.2380360Z * [new branch] gh/soulitzer/385/orig -> origin/gh/soulitzer/385/orig 2025-12-04T10:14:41.2380431Z * [new branch] gh/soulitzer/386/base -> origin/gh/soulitzer/386/base 2025-12-04T10:14:41.2380503Z * [new branch] gh/soulitzer/386/head -> origin/gh/soulitzer/386/head 2025-12-04T10:14:41.2380575Z * [new branch] gh/soulitzer/386/orig -> origin/gh/soulitzer/386/orig 2025-12-04T10:14:41.2380682Z * [new branch] gh/soulitzer/387/base -> origin/gh/soulitzer/387/base 2025-12-04T10:14:41.2380755Z * [new branch] gh/soulitzer/387/head -> origin/gh/soulitzer/387/head 2025-12-04T10:14:41.2380827Z * [new branch] gh/soulitzer/387/orig -> origin/gh/soulitzer/387/orig 2025-12-04T10:14:41.2380949Z * [new branch] gh/soulitzer/388/base -> origin/gh/soulitzer/388/base 2025-12-04T10:14:41.2381022Z * [new branch] gh/soulitzer/388/head -> origin/gh/soulitzer/388/head 2025-12-04T10:14:41.2381093Z * [new branch] gh/soulitzer/388/orig -> origin/gh/soulitzer/388/orig 2025-12-04T10:14:41.2381164Z * [new branch] gh/soulitzer/389/base -> origin/gh/soulitzer/389/base 2025-12-04T10:14:41.2381239Z * [new branch] gh/soulitzer/389/head -> origin/gh/soulitzer/389/head 2025-12-04T10:14:41.2381310Z * [new branch] gh/soulitzer/389/orig -> origin/gh/soulitzer/389/orig 2025-12-04T10:14:41.2381381Z * [new branch] gh/soulitzer/390/base -> origin/gh/soulitzer/390/base 2025-12-04T10:14:41.2381452Z * [new branch] gh/soulitzer/390/head -> origin/gh/soulitzer/390/head 2025-12-04T10:14:41.2381523Z * [new branch] gh/soulitzer/390/orig -> origin/gh/soulitzer/390/orig 2025-12-04T10:14:41.2381597Z * [new branch] gh/soulitzer/391/base -> origin/gh/soulitzer/391/base 2025-12-04T10:14:41.2381669Z * [new branch] gh/soulitzer/391/head -> origin/gh/soulitzer/391/head 2025-12-04T10:14:41.2381740Z * [new branch] gh/soulitzer/391/orig -> origin/gh/soulitzer/391/orig 2025-12-04T10:14:41.2381811Z * [new branch] gh/soulitzer/392/base -> origin/gh/soulitzer/392/base 2025-12-04T10:14:41.2381923Z * [new branch] gh/soulitzer/392/head -> origin/gh/soulitzer/392/head 
2025-12-04T10:14:41.2381995Z * [new branch] gh/soulitzer/392/orig -> origin/gh/soulitzer/392/orig 2025-12-04T10:14:41.2382066Z * [new branch] gh/swolchok/728/next -> origin/gh/swolchok/728/next 2025-12-04T10:14:41.2382139Z * [new branch] gh/swolchok/819/base -> origin/gh/swolchok/819/base 2025-12-04T10:14:41.2382208Z * [new branch] gh/swolchok/819/head -> origin/gh/swolchok/819/head 2025-12-04T10:14:41.2382279Z * [new branch] gh/swolchok/819/orig -> origin/gh/swolchok/819/orig 2025-12-04T10:14:41.2382349Z * [new branch] gh/swolchok/824/base -> origin/gh/swolchok/824/base 2025-12-04T10:14:41.2382419Z * [new branch] gh/swolchok/824/head -> origin/gh/swolchok/824/head 2025-12-04T10:14:41.2382491Z * [new branch] gh/swolchok/824/orig -> origin/gh/swolchok/824/orig 2025-12-04T10:14:41.2382560Z * [new branch] gh/swolchok/829/base -> origin/gh/swolchok/829/base 2025-12-04T10:14:41.2382630Z * [new branch] gh/swolchok/829/head -> origin/gh/swolchok/829/head 2025-12-04T10:14:41.2382699Z * [new branch] gh/swolchok/829/orig -> origin/gh/swolchok/829/orig 2025-12-04T10:14:41.2382769Z * [new branch] gh/swolchok/839/base -> origin/gh/swolchok/839/base 2025-12-04T10:14:41.2382838Z * [new branch] gh/swolchok/839/head -> origin/gh/swolchok/839/head 2025-12-04T10:14:41.2382909Z * [new branch] gh/swolchok/839/orig -> origin/gh/swolchok/839/orig 2025-12-04T10:14:41.2382979Z * [new branch] gh/swolchok/841/base -> origin/gh/swolchok/841/base 2025-12-04T10:14:41.2383048Z * [new branch] gh/swolchok/841/head -> origin/gh/swolchok/841/head 2025-12-04T10:14:41.2383120Z * [new branch] gh/swolchok/841/orig -> origin/gh/swolchok/841/orig 2025-12-04T10:14:41.2383188Z * [new branch] gh/swolchok/842/base -> origin/gh/swolchok/842/base 2025-12-04T10:14:41.2383258Z * [new branch] gh/swolchok/842/head -> origin/gh/swolchok/842/head 2025-12-04T10:14:41.2383328Z * [new branch] gh/swolchok/842/orig -> origin/gh/swolchok/842/orig 2025-12-04T10:14:41.2383398Z * [new branch] gh/swolchok/845/base -> origin/gh/swolchok/845/base 2025-12-04T10:14:41.2383467Z * [new branch] gh/swolchok/845/head -> origin/gh/swolchok/845/head 2025-12-04T10:14:41.2383561Z * [new branch] gh/swolchok/845/orig -> origin/gh/swolchok/845/orig 2025-12-04T10:14:41.2383631Z * [new branch] gh/swolchok/848/base -> origin/gh/swolchok/848/base 2025-12-04T10:14:41.2383700Z * [new branch] gh/swolchok/848/head -> origin/gh/swolchok/848/head 2025-12-04T10:14:41.2383771Z * [new branch] gh/swolchok/848/orig -> origin/gh/swolchok/848/orig 2025-12-04T10:14:41.2383840Z * [new branch] gh/swolchok/856/base -> origin/gh/swolchok/856/base 2025-12-04T10:14:41.2383911Z * [new branch] gh/swolchok/856/head -> origin/gh/swolchok/856/head 2025-12-04T10:14:41.2383980Z * [new branch] gh/swolchok/856/orig -> origin/gh/swolchok/856/orig 2025-12-04T10:14:41.2384049Z * [new branch] gh/swolchok/860/base -> origin/gh/swolchok/860/base 2025-12-04T10:14:41.2384119Z * [new branch] gh/swolchok/860/head -> origin/gh/swolchok/860/head 2025-12-04T10:14:41.2384189Z * [new branch] gh/swolchok/860/orig -> origin/gh/swolchok/860/orig 2025-12-04T10:14:41.2384259Z * [new branch] gh/swolchok/861/base -> origin/gh/swolchok/861/base 2025-12-04T10:14:41.2384328Z * [new branch] gh/swolchok/861/head -> origin/gh/swolchok/861/head 2025-12-04T10:14:41.2384429Z * [new branch] gh/swolchok/861/orig -> origin/gh/swolchok/861/orig 2025-12-04T10:14:41.2384499Z * [new branch] gh/swolchok/862/base -> origin/gh/swolchok/862/base 2025-12-04T10:14:41.2384568Z * [new branch] gh/swolchok/862/head -> origin/gh/swolchok/862/head 
2025-12-04T10:14:41.2384637Z * [new branch] gh/swolchok/862/orig -> origin/gh/swolchok/862/orig 2025-12-04T10:14:41.2384707Z * [new branch] gh/swolchok/863/base -> origin/gh/swolchok/863/base 2025-12-04T10:14:41.2384776Z * [new branch] gh/swolchok/863/head -> origin/gh/swolchok/863/head 2025-12-04T10:14:41.2384847Z * [new branch] gh/swolchok/863/orig -> origin/gh/swolchok/863/orig 2025-12-04T10:14:41.2384916Z * [new branch] gh/swolchok/864/base -> origin/gh/swolchok/864/base 2025-12-04T10:14:41.2384986Z * [new branch] gh/swolchok/864/head -> origin/gh/swolchok/864/head 2025-12-04T10:14:41.2385056Z * [new branch] gh/swolchok/864/orig -> origin/gh/swolchok/864/orig 2025-12-04T10:14:41.2385126Z * [new branch] gh/swolchok/865/base -> origin/gh/swolchok/865/base 2025-12-04T10:14:41.2385195Z * [new branch] gh/swolchok/865/head -> origin/gh/swolchok/865/head 2025-12-04T10:14:41.2385264Z * [new branch] gh/swolchok/865/orig -> origin/gh/swolchok/865/orig 2025-12-04T10:14:41.2385335Z * [new branch] gh/swolchok/866/base -> origin/gh/swolchok/866/base 2025-12-04T10:14:41.2385405Z * [new branch] gh/swolchok/866/head -> origin/gh/swolchok/866/head 2025-12-04T10:14:41.2385475Z * [new branch] gh/swolchok/866/orig -> origin/gh/swolchok/866/orig 2025-12-04T10:14:41.2385545Z * [new branch] gh/swolchok/867/base -> origin/gh/swolchok/867/base 2025-12-04T10:14:41.2385614Z * [new branch] gh/swolchok/867/head -> origin/gh/swolchok/867/head 2025-12-04T10:14:41.2385684Z * [new branch] gh/swolchok/867/orig -> origin/gh/swolchok/867/orig 2025-12-04T10:14:41.2385755Z * [new branch] gh/swolchok/868/base -> origin/gh/swolchok/868/base 2025-12-04T10:14:41.2385824Z * [new branch] gh/swolchok/868/head -> origin/gh/swolchok/868/head 2025-12-04T10:14:41.2385893Z * [new branch] gh/swolchok/868/orig -> origin/gh/swolchok/868/orig 2025-12-04T10:14:41.2385963Z * [new branch] gh/swolchok/869/base -> origin/gh/swolchok/869/base 2025-12-04T10:14:41.2386059Z * [new branch] gh/swolchok/869/head -> origin/gh/swolchok/869/head 2025-12-04T10:14:41.2386129Z * [new branch] gh/swolchok/869/orig -> origin/gh/swolchok/869/orig 2025-12-04T10:14:41.2386200Z * [new branch] gh/swolchok/870/base -> origin/gh/swolchok/870/base 2025-12-04T10:14:41.2386269Z * [new branch] gh/swolchok/870/head -> origin/gh/swolchok/870/head 2025-12-04T10:14:41.2386340Z * [new branch] gh/swolchok/870/orig -> origin/gh/swolchok/870/orig 2025-12-04T10:14:41.2386411Z * [new branch] gh/swolchok/871/base -> origin/gh/swolchok/871/base 2025-12-04T10:14:41.2386480Z * [new branch] gh/swolchok/871/head -> origin/gh/swolchok/871/head 2025-12-04T10:14:41.2386549Z * [new branch] gh/swolchok/871/orig -> origin/gh/swolchok/871/orig 2025-12-04T10:14:41.2386622Z * [new branch] gh/teja-rao/4/base -> origin/gh/teja-rao/4/base 2025-12-04T10:14:41.2386694Z * [new branch] gh/teja-rao/4/head -> origin/gh/teja-rao/4/head 2025-12-04T10:14:41.2386764Z * [new branch] gh/teja-rao/4/orig -> origin/gh/teja-rao/4/orig 2025-12-04T10:14:41.2386833Z * [new branch] gh/tianyu-l/2/base -> origin/gh/tianyu-l/2/base 2025-12-04T10:14:41.2386901Z * [new branch] gh/tianyu-l/2/head -> origin/gh/tianyu-l/2/head 2025-12-04T10:14:41.2386995Z * [new branch] gh/tianyu-l/2/orig -> origin/gh/tianyu-l/2/orig 2025-12-04T10:14:41.2387063Z * [new branch] gh/tianyu-l/3/base -> origin/gh/tianyu-l/3/base 2025-12-04T10:14:41.2387130Z * [new branch] gh/tianyu-l/3/orig -> origin/gh/tianyu-l/3/orig 2025-12-04T10:14:41.2387198Z * [new branch] gh/tianyu-l/4/base -> origin/gh/tianyu-l/4/base 2025-12-04T10:14:41.2387265Z * [new 
branch] gh/tianyu-l/4/head -> origin/gh/tianyu-l/4/head 2025-12-04T10:14:41.2387334Z * [new branch] gh/tianyu-l/4/orig -> origin/gh/tianyu-l/4/orig 2025-12-04T10:14:41.2387425Z * [new branch] gh/tugsbayasgalan/10/base -> origin/gh/tugsbayasgalan/10/base 2025-12-04T10:14:41.2387509Z * [new branch] gh/tugsbayasgalan/10/head -> origin/gh/tugsbayasgalan/10/head 2025-12-04T10:14:41.2387591Z * [new branch] gh/tugsbayasgalan/10/orig -> origin/gh/tugsbayasgalan/10/orig 2025-12-04T10:14:41.2387676Z * [new branch] gh/tugsbayasgalan/13/base -> origin/gh/tugsbayasgalan/13/base 2025-12-04T10:14:41.2387757Z * [new branch] gh/tugsbayasgalan/13/head -> origin/gh/tugsbayasgalan/13/head 2025-12-04T10:14:41.2387838Z * [new branch] gh/tugsbayasgalan/13/orig -> origin/gh/tugsbayasgalan/13/orig 2025-12-04T10:14:41.2387920Z * [new branch] gh/tugsbayasgalan/17/base -> origin/gh/tugsbayasgalan/17/base 2025-12-04T10:14:41.2388000Z * [new branch] gh/tugsbayasgalan/17/head -> origin/gh/tugsbayasgalan/17/head 2025-12-04T10:14:41.2388082Z * [new branch] gh/tugsbayasgalan/17/orig -> origin/gh/tugsbayasgalan/17/orig 2025-12-04T10:14:41.2388166Z * [new branch] gh/tugsbayasgalan/2/base -> origin/gh/tugsbayasgalan/2/base 2025-12-04T10:14:41.2388246Z * [new branch] gh/tugsbayasgalan/2/head -> origin/gh/tugsbayasgalan/2/head 2025-12-04T10:14:41.2388328Z * [new branch] gh/tugsbayasgalan/2/orig -> origin/gh/tugsbayasgalan/2/orig 2025-12-04T10:14:41.2388410Z * [new branch] gh/tugsbayasgalan/28/base -> origin/gh/tugsbayasgalan/28/base 2025-12-04T10:14:41.2388491Z * [new branch] gh/tugsbayasgalan/28/head -> origin/gh/tugsbayasgalan/28/head 2025-12-04T10:14:41.2388573Z * [new branch] gh/tugsbayasgalan/28/orig -> origin/gh/tugsbayasgalan/28/orig 2025-12-04T10:14:41.2388653Z * [new branch] gh/tugsbayasgalan/32/base -> origin/gh/tugsbayasgalan/32/base 2025-12-04T10:14:41.2388764Z * [new branch] gh/tugsbayasgalan/32/head -> origin/gh/tugsbayasgalan/32/head 2025-12-04T10:14:41.2388846Z * [new branch] gh/tugsbayasgalan/32/orig -> origin/gh/tugsbayasgalan/32/orig 2025-12-04T10:14:41.2388926Z * [new branch] gh/tugsbayasgalan/35/base -> origin/gh/tugsbayasgalan/35/base 2025-12-04T10:14:41.2389007Z * [new branch] gh/tugsbayasgalan/35/head -> origin/gh/tugsbayasgalan/35/head 2025-12-04T10:14:41.2389088Z * [new branch] gh/tugsbayasgalan/35/orig -> origin/gh/tugsbayasgalan/35/orig 2025-12-04T10:14:41.2389169Z * [new branch] gh/tugsbayasgalan/36/base -> origin/gh/tugsbayasgalan/36/base 2025-12-04T10:14:41.2389250Z * [new branch] gh/tugsbayasgalan/36/head -> origin/gh/tugsbayasgalan/36/head 2025-12-04T10:14:41.2389331Z * [new branch] gh/tugsbayasgalan/36/orig -> origin/gh/tugsbayasgalan/36/orig 2025-12-04T10:14:41.2389411Z * [new branch] gh/tugsbayasgalan/37/base -> origin/gh/tugsbayasgalan/37/base 2025-12-04T10:14:41.2389493Z * [new branch] gh/tugsbayasgalan/37/head -> origin/gh/tugsbayasgalan/37/head 2025-12-04T10:14:41.2389574Z * [new branch] gh/tugsbayasgalan/37/orig -> origin/gh/tugsbayasgalan/37/orig 2025-12-04T10:14:41.2389654Z * [new branch] gh/tugsbayasgalan/43/base -> origin/gh/tugsbayasgalan/43/base 2025-12-04T10:14:41.2389762Z * [new branch] gh/tugsbayasgalan/43/head -> origin/gh/tugsbayasgalan/43/head 2025-12-04T10:14:41.2389845Z * [new branch] gh/tugsbayasgalan/43/orig -> origin/gh/tugsbayasgalan/43/orig 2025-12-04T10:14:41.2389926Z * [new branch] gh/tugsbayasgalan/48/base -> origin/gh/tugsbayasgalan/48/base 2025-12-04T10:14:41.2390008Z * [new branch] gh/tugsbayasgalan/48/head -> origin/gh/tugsbayasgalan/48/head 
2025-12-04T10:14:41.2390089Z * [new branch] gh/tugsbayasgalan/48/orig -> origin/gh/tugsbayasgalan/48/orig 2025-12-04T10:14:41.2390171Z * [new branch] gh/tugsbayasgalan/51/base -> origin/gh/tugsbayasgalan/51/base 2025-12-04T10:14:41.2390253Z * [new branch] gh/tugsbayasgalan/51/head -> origin/gh/tugsbayasgalan/51/head 2025-12-04T10:14:41.2390334Z * [new branch] gh/tugsbayasgalan/51/orig -> origin/gh/tugsbayasgalan/51/orig 2025-12-04T10:14:41.2390416Z * [new branch] gh/tugsbayasgalan/52/base -> origin/gh/tugsbayasgalan/52/base 2025-12-04T10:14:41.2390497Z * [new branch] gh/tugsbayasgalan/52/head -> origin/gh/tugsbayasgalan/52/head 2025-12-04T10:14:41.2390578Z * [new branch] gh/tugsbayasgalan/52/orig -> origin/gh/tugsbayasgalan/52/orig 2025-12-04T10:14:41.2390707Z * [new branch] gh/tugsbayasgalan/53/base -> origin/gh/tugsbayasgalan/53/base 2025-12-04T10:14:41.2390791Z * [new branch] gh/tugsbayasgalan/53/head -> origin/gh/tugsbayasgalan/53/head 2025-12-04T10:14:41.2390872Z * [new branch] gh/tugsbayasgalan/53/orig -> origin/gh/tugsbayasgalan/53/orig 2025-12-04T10:14:41.2390954Z * [new branch] gh/tugsbayasgalan/55/base -> origin/gh/tugsbayasgalan/55/base 2025-12-04T10:14:41.2391036Z * [new branch] gh/tugsbayasgalan/55/head -> origin/gh/tugsbayasgalan/55/head 2025-12-04T10:14:41.2391117Z * [new branch] gh/tugsbayasgalan/55/orig -> origin/gh/tugsbayasgalan/55/orig 2025-12-04T10:14:41.2391198Z * [new branch] gh/tugsbayasgalan/59/base -> origin/gh/tugsbayasgalan/59/base 2025-12-04T10:14:41.2391280Z * [new branch] gh/tugsbayasgalan/59/head -> origin/gh/tugsbayasgalan/59/head 2025-12-04T10:14:41.2391360Z * [new branch] gh/tugsbayasgalan/59/orig -> origin/gh/tugsbayasgalan/59/orig 2025-12-04T10:14:41.2391441Z * [new branch] gh/tugsbayasgalan/6/base -> origin/gh/tugsbayasgalan/6/base 2025-12-04T10:14:41.2391521Z * [new branch] gh/tugsbayasgalan/6/head -> origin/gh/tugsbayasgalan/6/head 2025-12-04T10:14:41.2391640Z * [new branch] gh/tugsbayasgalan/6/orig -> origin/gh/tugsbayasgalan/6/orig 2025-12-04T10:14:41.2391722Z * [new branch] gh/tugsbayasgalan/60/base -> origin/gh/tugsbayasgalan/60/base 2025-12-04T10:14:41.2391802Z * [new branch] gh/tugsbayasgalan/60/head -> origin/gh/tugsbayasgalan/60/head 2025-12-04T10:14:41.2391886Z * [new branch] gh/tugsbayasgalan/60/orig -> origin/gh/tugsbayasgalan/60/orig 2025-12-04T10:14:41.2391967Z * [new branch] gh/tugsbayasgalan/61/base -> origin/gh/tugsbayasgalan/61/base 2025-12-04T10:14:41.2392048Z * [new branch] gh/tugsbayasgalan/61/head -> origin/gh/tugsbayasgalan/61/head 2025-12-04T10:14:41.2392128Z * [new branch] gh/tugsbayasgalan/61/orig -> origin/gh/tugsbayasgalan/61/orig 2025-12-04T10:14:41.2392209Z * [new branch] gh/tugsbayasgalan/63/base -> origin/gh/tugsbayasgalan/63/base 2025-12-04T10:14:41.2392292Z * [new branch] gh/tugsbayasgalan/63/head -> origin/gh/tugsbayasgalan/63/head 2025-12-04T10:14:41.2392372Z * [new branch] gh/tugsbayasgalan/63/orig -> origin/gh/tugsbayasgalan/63/orig 2025-12-04T10:14:41.2392453Z * [new branch] gh/tugsbayasgalan/67/base -> origin/gh/tugsbayasgalan/67/base 2025-12-04T10:14:41.2392534Z * [new branch] gh/tugsbayasgalan/67/head -> origin/gh/tugsbayasgalan/67/head 2025-12-04T10:14:41.2392651Z * [new branch] gh/tugsbayasgalan/67/orig -> origin/gh/tugsbayasgalan/67/orig 2025-12-04T10:14:41.2392732Z * [new branch] gh/tugsbayasgalan/68/base -> origin/gh/tugsbayasgalan/68/base 2025-12-04T10:14:41.2392813Z * [new branch] gh/tugsbayasgalan/68/head -> origin/gh/tugsbayasgalan/68/head 2025-12-04T10:14:41.2392893Z * [new branch] 
gh/tugsbayasgalan/68/orig -> origin/gh/tugsbayasgalan/68/orig 2025-12-04T10:14:41.2392975Z * [new branch] gh/tugsbayasgalan/7/base -> origin/gh/tugsbayasgalan/7/base 2025-12-04T10:14:41.2393057Z * [new branch] gh/tugsbayasgalan/7/head -> origin/gh/tugsbayasgalan/7/head 2025-12-04T10:14:41.2393138Z * [new branch] gh/tugsbayasgalan/7/orig -> origin/gh/tugsbayasgalan/7/orig 2025-12-04T10:14:41.2393219Z * [new branch] gh/tugsbayasgalan/70/base -> origin/gh/tugsbayasgalan/70/base 2025-12-04T10:14:41.2393301Z * [new branch] gh/tugsbayasgalan/70/head -> origin/gh/tugsbayasgalan/70/head 2025-12-04T10:14:41.2393382Z * [new branch] gh/tugsbayasgalan/70/orig -> origin/gh/tugsbayasgalan/70/orig 2025-12-04T10:14:41.2393463Z * [new branch] gh/tugsbayasgalan/71/base -> origin/gh/tugsbayasgalan/71/base 2025-12-04T10:14:41.2393543Z * [new branch] gh/tugsbayasgalan/71/head -> origin/gh/tugsbayasgalan/71/head 2025-12-04T10:14:41.2393625Z * [new branch] gh/tugsbayasgalan/71/orig -> origin/gh/tugsbayasgalan/71/orig 2025-12-04T10:14:41.2393707Z * [new branch] gh/tugsbayasgalan/72/base -> origin/gh/tugsbayasgalan/72/base 2025-12-04T10:14:41.2393787Z * [new branch] gh/tugsbayasgalan/72/head -> origin/gh/tugsbayasgalan/72/head 2025-12-04T10:14:41.2393868Z * [new branch] gh/tugsbayasgalan/72/orig -> origin/gh/tugsbayasgalan/72/orig 2025-12-04T10:14:41.2393950Z * [new branch] gh/tugsbayasgalan/73/base -> origin/gh/tugsbayasgalan/73/base 2025-12-04T10:14:41.2394030Z * [new branch] gh/tugsbayasgalan/73/head -> origin/gh/tugsbayasgalan/73/head 2025-12-04T10:14:41.2394112Z * [new branch] gh/tugsbayasgalan/73/orig -> origin/gh/tugsbayasgalan/73/orig 2025-12-04T10:14:41.2394193Z * [new branch] gh/tugsbayasgalan/74/base -> origin/gh/tugsbayasgalan/74/base 2025-12-04T10:14:41.2394274Z * [new branch] gh/tugsbayasgalan/74/head -> origin/gh/tugsbayasgalan/74/head 2025-12-04T10:14:41.2394356Z * [new branch] gh/tugsbayasgalan/74/orig -> origin/gh/tugsbayasgalan/74/orig 2025-12-04T10:14:41.2394466Z * [new branch] gh/tugsbayasgalan/75/base -> origin/gh/tugsbayasgalan/75/base 2025-12-04T10:14:41.2394547Z * [new branch] gh/tugsbayasgalan/75/head -> origin/gh/tugsbayasgalan/75/head 2025-12-04T10:14:41.2394628Z * [new branch] gh/tugsbayasgalan/75/orig -> origin/gh/tugsbayasgalan/75/orig 2025-12-04T10:14:41.2394711Z * [new branch] gh/tugsbayasgalan/76/base -> origin/gh/tugsbayasgalan/76/base 2025-12-04T10:14:41.2394792Z * [new branch] gh/tugsbayasgalan/76/head -> origin/gh/tugsbayasgalan/76/head 2025-12-04T10:14:41.2394872Z * [new branch] gh/tugsbayasgalan/76/orig -> origin/gh/tugsbayasgalan/76/orig 2025-12-04T10:14:41.2394953Z * [new branch] gh/tugsbayasgalan/77/base -> origin/gh/tugsbayasgalan/77/base 2025-12-04T10:14:41.2395034Z * [new branch] gh/tugsbayasgalan/77/head -> origin/gh/tugsbayasgalan/77/head 2025-12-04T10:14:41.2395117Z * [new branch] gh/tugsbayasgalan/77/orig -> origin/gh/tugsbayasgalan/77/orig 2025-12-04T10:14:41.2395197Z * [new branch] gh/tugsbayasgalan/78/base -> origin/gh/tugsbayasgalan/78/base 2025-12-04T10:14:41.2395279Z * [new branch] gh/tugsbayasgalan/78/head -> origin/gh/tugsbayasgalan/78/head 2025-12-04T10:14:41.2395387Z * [new branch] gh/tugsbayasgalan/78/orig -> origin/gh/tugsbayasgalan/78/orig 2025-12-04T10:14:41.2395468Z * [new branch] gh/tugsbayasgalan/79/base -> origin/gh/tugsbayasgalan/79/base 2025-12-04T10:14:41.2395550Z * [new branch] gh/tugsbayasgalan/79/head -> origin/gh/tugsbayasgalan/79/head 2025-12-04T10:14:41.2395631Z * [new branch] gh/tugsbayasgalan/79/orig -> origin/gh/tugsbayasgalan/79/orig 
2025-12-04T10:14:41.2395710Z * [new branch] gh/tugsbayasgalan/8/base -> origin/gh/tugsbayasgalan/8/base 2025-12-04T10:14:41.2395791Z * [new branch] gh/tugsbayasgalan/8/head -> origin/gh/tugsbayasgalan/8/head 2025-12-04T10:14:41.2395870Z * [new branch] gh/tugsbayasgalan/8/orig -> origin/gh/tugsbayasgalan/8/orig 2025-12-04T10:14:41.2395952Z * [new branch] gh/tugsbayasgalan/80/base -> origin/gh/tugsbayasgalan/80/base 2025-12-04T10:14:41.2396033Z * [new branch] gh/tugsbayasgalan/80/head -> origin/gh/tugsbayasgalan/80/head 2025-12-04T10:14:41.2396118Z * [new branch] gh/tugsbayasgalan/80/orig -> origin/gh/tugsbayasgalan/80/orig 2025-12-04T10:14:41.2396200Z * [new branch] gh/tugsbayasgalan/81/base -> origin/gh/tugsbayasgalan/81/base 2025-12-04T10:14:41.2396281Z * [new branch] gh/tugsbayasgalan/81/head -> origin/gh/tugsbayasgalan/81/head 2025-12-04T10:14:41.2396361Z * [new branch] gh/tugsbayasgalan/81/orig -> origin/gh/tugsbayasgalan/81/orig 2025-12-04T10:14:41.2396442Z * [new branch] gh/tugsbayasgalan/82/base -> origin/gh/tugsbayasgalan/82/base 2025-12-04T10:14:41.2396525Z * [new branch] gh/tugsbayasgalan/82/head -> origin/gh/tugsbayasgalan/82/head 2025-12-04T10:14:41.2396605Z * [new branch] gh/tugsbayasgalan/82/orig -> origin/gh/tugsbayasgalan/82/orig 2025-12-04T10:14:41.2396686Z * [new branch] gh/tugsbayasgalan/83/base -> origin/gh/tugsbayasgalan/83/base 2025-12-04T10:14:41.2396768Z * [new branch] gh/tugsbayasgalan/83/head -> origin/gh/tugsbayasgalan/83/head 2025-12-04T10:14:41.2396848Z * [new branch] gh/tugsbayasgalan/83/orig -> origin/gh/tugsbayasgalan/83/orig 2025-12-04T10:14:41.2396929Z * [new branch] gh/tugsbayasgalan/84/base -> origin/gh/tugsbayasgalan/84/base 2025-12-04T10:14:41.2397009Z * [new branch] gh/tugsbayasgalan/84/head -> origin/gh/tugsbayasgalan/84/head 2025-12-04T10:14:41.2397090Z * [new branch] gh/tugsbayasgalan/84/orig -> origin/gh/tugsbayasgalan/84/orig 2025-12-04T10:14:41.2397194Z * [new branch] gh/tugsbayasgalan/85/base -> origin/gh/tugsbayasgalan/85/base 2025-12-04T10:14:41.2397275Z * [new branch] gh/tugsbayasgalan/85/head -> origin/gh/tugsbayasgalan/85/head 2025-12-04T10:14:41.2397355Z * [new branch] gh/tugsbayasgalan/85/orig -> origin/gh/tugsbayasgalan/85/orig 2025-12-04T10:14:41.2397437Z * [new branch] gh/tugsbayasgalan/86/base -> origin/gh/tugsbayasgalan/86/base 2025-12-04T10:14:41.2397518Z * [new branch] gh/tugsbayasgalan/86/head -> origin/gh/tugsbayasgalan/86/head 2025-12-04T10:14:41.2397600Z * [new branch] gh/tugsbayasgalan/86/orig -> origin/gh/tugsbayasgalan/86/orig 2025-12-04T10:14:41.2397681Z * [new branch] gh/tugsbayasgalan/87/base -> origin/gh/tugsbayasgalan/87/base 2025-12-04T10:14:41.2397762Z * [new branch] gh/tugsbayasgalan/87/head -> origin/gh/tugsbayasgalan/87/head 2025-12-04T10:14:41.2397919Z * [new branch] gh/tugsbayasgalan/87/orig -> origin/gh/tugsbayasgalan/87/orig 2025-12-04T10:14:41.2398002Z * [new branch] gh/tugsbayasgalan/88/base -> origin/gh/tugsbayasgalan/88/base 2025-12-04T10:14:41.2398083Z * [new branch] gh/tugsbayasgalan/88/head -> origin/gh/tugsbayasgalan/88/head 2025-12-04T10:14:41.2398165Z * [new branch] gh/tugsbayasgalan/88/orig -> origin/gh/tugsbayasgalan/88/orig 2025-12-04T10:14:41.2398274Z * [new branch] gh/tugsbayasgalan/89/base -> origin/gh/tugsbayasgalan/89/base 2025-12-04T10:14:41.2398355Z * [new branch] gh/tugsbayasgalan/89/head -> origin/gh/tugsbayasgalan/89/head 2025-12-04T10:14:41.2398437Z * [new branch] gh/tugsbayasgalan/89/orig -> origin/gh/tugsbayasgalan/89/orig 2025-12-04T10:14:41.2398516Z * [new branch] 
gh/tugsbayasgalan/9/base -> origin/gh/tugsbayasgalan/9/base 2025-12-04T10:14:41.2398595Z * [new branch] gh/tugsbayasgalan/9/head -> origin/gh/tugsbayasgalan/9/head 2025-12-04T10:14:41.2398676Z * [new branch] gh/tugsbayasgalan/9/orig -> origin/gh/tugsbayasgalan/9/orig 2025-12-04T10:14:41.2398757Z * [new branch] gh/tugsbayasgalan/90/base -> origin/gh/tugsbayasgalan/90/base 2025-12-04T10:14:41.2398838Z * [new branch] gh/tugsbayasgalan/90/head -> origin/gh/tugsbayasgalan/90/head 2025-12-04T10:14:41.2398923Z * [new branch] gh/tugsbayasgalan/90/orig -> origin/gh/tugsbayasgalan/90/orig 2025-12-04T10:14:41.2399003Z * [new branch] gh/tugsbayasgalan/91/base -> origin/gh/tugsbayasgalan/91/base 2025-12-04T10:14:41.2399084Z * [new branch] gh/tugsbayasgalan/91/head -> origin/gh/tugsbayasgalan/91/head 2025-12-04T10:14:41.2399167Z * [new branch] gh/tugsbayasgalan/91/orig -> origin/gh/tugsbayasgalan/91/orig 2025-12-04T10:14:41.2399250Z * [new branch] gh/tugsbayasgalan/92/base -> origin/gh/tugsbayasgalan/92/base 2025-12-04T10:14:41.2399335Z * [new branch] gh/tugsbayasgalan/92/head -> origin/gh/tugsbayasgalan/92/head 2025-12-04T10:14:41.2399414Z * [new branch] gh/tugsbayasgalan/92/orig -> origin/gh/tugsbayasgalan/92/orig 2025-12-04T10:14:41.2399494Z * [new branch] gh/tugsbayasgalan/93/base -> origin/gh/tugsbayasgalan/93/base 2025-12-04T10:14:41.2399575Z * [new branch] gh/tugsbayasgalan/93/head -> origin/gh/tugsbayasgalan/93/head 2025-12-04T10:14:41.2399658Z * [new branch] gh/tugsbayasgalan/93/orig -> origin/gh/tugsbayasgalan/93/orig 2025-12-04T10:14:41.2399726Z * [new branch] gh/v0i0/14/base -> origin/gh/v0i0/14/base 2025-12-04T10:14:41.2399793Z * [new branch] gh/v0i0/14/head -> origin/gh/v0i0/14/head 2025-12-04T10:14:41.2399857Z * [new branch] gh/v0i0/14/orig -> origin/gh/v0i0/14/orig 2025-12-04T10:14:41.2399919Z * [new branch] gh/v0i0/15/base -> origin/gh/v0i0/15/base 2025-12-04T10:14:41.2400009Z * [new branch] gh/v0i0/15/head -> origin/gh/v0i0/15/head 2025-12-04T10:14:41.2400070Z * [new branch] gh/v0i0/15/orig -> origin/gh/v0i0/15/orig 2025-12-04T10:14:41.2400131Z * [new branch] gh/v0i0/16/base -> origin/gh/v0i0/16/base 2025-12-04T10:14:41.2400194Z * [new branch] gh/v0i0/16/head -> origin/gh/v0i0/16/head 2025-12-04T10:14:41.2400257Z * [new branch] gh/v0i0/16/orig -> origin/gh/v0i0/16/orig 2025-12-04T10:14:41.2400319Z * [new branch] gh/v0i0/17/base -> origin/gh/v0i0/17/base 2025-12-04T10:14:41.2400381Z * [new branch] gh/v0i0/17/head -> origin/gh/v0i0/17/head 2025-12-04T10:14:41.2400443Z * [new branch] gh/v0i0/17/orig -> origin/gh/v0i0/17/orig 2025-12-04T10:14:41.2400505Z * [new branch] gh/v0i0/18/base -> origin/gh/v0i0/18/base 2025-12-04T10:14:41.2400569Z * [new branch] gh/v0i0/18/head -> origin/gh/v0i0/18/head 2025-12-04T10:14:41.2400669Z * [new branch] gh/v0i0/18/orig -> origin/gh/v0i0/18/orig 2025-12-04T10:14:41.2400734Z * [new branch] gh/v0i0/19/base -> origin/gh/v0i0/19/base 2025-12-04T10:14:41.2400797Z * [new branch] gh/v0i0/19/head -> origin/gh/v0i0/19/head 2025-12-04T10:14:41.2400907Z * [new branch] gh/v0i0/19/orig -> origin/gh/v0i0/19/orig 2025-12-04T10:14:41.2400988Z * [new branch] gh/vishal9-team/1/base -> origin/gh/vishal9-team/1/base 2025-12-04T10:14:41.2401065Z * [new branch] gh/vishal9-team/1/head -> origin/gh/vishal9-team/1/head 2025-12-04T10:14:41.2401140Z * [new branch] gh/vishal9-team/2/base -> origin/gh/vishal9-team/2/base 2025-12-04T10:14:41.2401214Z * [new branch] gh/vishal9-team/2/head -> origin/gh/vishal9-team/2/head 2025-12-04T10:14:41.2401287Z * [new branch] gh/vishal9-team/2/orig 
-> origin/gh/vishal9-team/2/orig 2025-12-04T10:14:41.2401361Z * [new branch] gh/vishal9-team/3/base -> origin/gh/vishal9-team/3/base 2025-12-04T10:14:41.2401434Z * [new branch] gh/vishal9-team/3/head -> origin/gh/vishal9-team/3/head 2025-12-04T10:14:41.2401506Z * [new branch] gh/vishal9-team/3/orig -> origin/gh/vishal9-team/3/orig 2025-12-04T10:14:41.2401579Z * [new branch] gh/vishal9-team/4/base -> origin/gh/vishal9-team/4/base 2025-12-04T10:14:41.2401653Z * [new branch] gh/vishal9-team/4/head -> origin/gh/vishal9-team/4/head 2025-12-04T10:14:41.2401725Z * [new branch] gh/vishal9-team/4/orig -> origin/gh/vishal9-team/4/orig 2025-12-04T10:14:41.2401790Z * [new branch] gh/vkuzo/1/next -> origin/gh/vkuzo/1/next 2025-12-04T10:14:41.2401856Z * [new branch] gh/vkuzo/2/next -> origin/gh/vkuzo/2/next 2025-12-04T10:14:41.2401921Z * [new branch] gh/vkuzo/3/next -> origin/gh/vkuzo/3/next 2025-12-04T10:14:41.2401994Z * [new branch] gh/wconstab/424/base -> origin/gh/wconstab/424/base 2025-12-04T10:14:41.2402067Z * [new branch] gh/wconstab/424/head -> origin/gh/wconstab/424/head 2025-12-04T10:14:41.2402138Z * [new branch] gh/wconstab/424/orig -> origin/gh/wconstab/424/orig 2025-12-04T10:14:41.2402209Z * [new branch] gh/wconstab/435/base -> origin/gh/wconstab/435/base 2025-12-04T10:14:41.2402279Z * [new branch] gh/wconstab/435/head -> origin/gh/wconstab/435/head 2025-12-04T10:14:41.2402348Z * [new branch] gh/wconstab/435/orig -> origin/gh/wconstab/435/orig 2025-12-04T10:14:41.2402417Z * [new branch] gh/wconstab/444/base -> origin/gh/wconstab/444/base 2025-12-04T10:14:41.2402487Z * [new branch] gh/wconstab/444/head -> origin/gh/wconstab/444/head 2025-12-04T10:14:41.2402597Z * [new branch] gh/wconstab/444/orig -> origin/gh/wconstab/444/orig 2025-12-04T10:14:41.2402667Z * [new branch] gh/wconstab/447/base -> origin/gh/wconstab/447/base 2025-12-04T10:14:41.2402736Z * [new branch] gh/wconstab/447/head -> origin/gh/wconstab/447/head 2025-12-04T10:14:41.2402805Z * [new branch] gh/wconstab/447/orig -> origin/gh/wconstab/447/orig 2025-12-04T10:14:41.2402875Z * [new branch] gh/wconstab/448/base -> origin/gh/wconstab/448/base 2025-12-04T10:14:41.2402944Z * [new branch] gh/wconstab/448/head -> origin/gh/wconstab/448/head 2025-12-04T10:14:41.2403014Z * [new branch] gh/wconstab/448/orig -> origin/gh/wconstab/448/orig 2025-12-04T10:14:41.2403083Z * [new branch] gh/wconstab/449/base -> origin/gh/wconstab/449/base 2025-12-04T10:14:41.2403152Z * [new branch] gh/wconstab/449/head -> origin/gh/wconstab/449/head 2025-12-04T10:14:41.2403224Z * [new branch] gh/wconstab/449/orig -> origin/gh/wconstab/449/orig 2025-12-04T10:14:41.2403294Z * [new branch] gh/wconstab/450/base -> origin/gh/wconstab/450/base 2025-12-04T10:14:41.2403363Z * [new branch] gh/wconstab/450/head -> origin/gh/wconstab/450/head 2025-12-04T10:14:41.2403432Z * [new branch] gh/wconstab/450/orig -> origin/gh/wconstab/450/orig 2025-12-04T10:14:41.2403537Z * [new branch] gh/wconstab/451/base -> origin/gh/wconstab/451/base 2025-12-04T10:14:41.2403607Z * [new branch] gh/wconstab/451/head -> origin/gh/wconstab/451/head 2025-12-04T10:14:41.2403676Z * [new branch] gh/wconstab/451/orig -> origin/gh/wconstab/451/orig 2025-12-04T10:14:41.2403746Z * [new branch] gh/wconstab/452/base -> origin/gh/wconstab/452/base 2025-12-04T10:14:41.2403815Z * [new branch] gh/wconstab/452/head -> origin/gh/wconstab/452/head 2025-12-04T10:14:41.2403885Z * [new branch] gh/wconstab/452/orig -> origin/gh/wconstab/452/orig 2025-12-04T10:14:41.2403956Z * [new branch] gh/wconstab/453/base -> 
origin/gh/wconstab/453/base 2025-12-04T10:14:41.2404025Z * [new branch] gh/wconstab/453/head -> origin/gh/wconstab/453/head 2025-12-04T10:14:41.2404095Z * [new branch] gh/wconstab/453/orig -> origin/gh/wconstab/453/orig 2025-12-04T10:14:41.2404166Z * [new branch] gh/wconstab/454/base -> origin/gh/wconstab/454/base 2025-12-04T10:14:41.2404235Z * [new branch] gh/wconstab/454/head -> origin/gh/wconstab/454/head 2025-12-04T10:14:41.2404305Z * [new branch] gh/wconstab/454/orig -> origin/gh/wconstab/454/orig 2025-12-04T10:14:41.2404375Z * [new branch] gh/wconstab/455/base -> origin/gh/wconstab/455/base 2025-12-04T10:14:41.2404444Z * [new branch] gh/wconstab/455/head -> origin/gh/wconstab/455/head 2025-12-04T10:14:41.2404516Z * [new branch] gh/wconstab/455/orig -> origin/gh/wconstab/455/orig 2025-12-04T10:14:41.2404585Z * [new branch] gh/wconstab/456/base -> origin/gh/wconstab/456/base 2025-12-04T10:14:41.2404655Z * [new branch] gh/wconstab/456/head -> origin/gh/wconstab/456/head 2025-12-04T10:14:41.2404727Z * [new branch] gh/wconstab/456/orig -> origin/gh/wconstab/456/orig 2025-12-04T10:14:41.2404796Z * [new branch] gh/wconstab/457/base -> origin/gh/wconstab/457/base 2025-12-04T10:14:41.2404866Z * [new branch] gh/wconstab/457/head -> origin/gh/wconstab/457/head 2025-12-04T10:14:41.2404936Z * [new branch] gh/wconstab/457/orig -> origin/gh/wconstab/457/orig 2025-12-04T10:14:41.2405005Z * [new branch] gh/wconstab/458/base -> origin/gh/wconstab/458/base 2025-12-04T10:14:41.2405075Z * [new branch] gh/wconstab/458/head -> origin/gh/wconstab/458/head 2025-12-04T10:14:41.2405174Z * [new branch] gh/wconstab/458/orig -> origin/gh/wconstab/458/orig 2025-12-04T10:14:41.2405244Z * [new branch] gh/wconstab/459/base -> origin/gh/wconstab/459/base 2025-12-04T10:14:41.2405313Z * [new branch] gh/wconstab/459/head -> origin/gh/wconstab/459/head 2025-12-04T10:14:41.2405384Z * [new branch] gh/wconstab/459/orig -> origin/gh/wconstab/459/orig 2025-12-04T10:14:41.2405453Z * [new branch] gh/wconstab/460/base -> origin/gh/wconstab/460/base 2025-12-04T10:14:41.2405523Z * [new branch] gh/wconstab/460/head -> origin/gh/wconstab/460/head 2025-12-04T10:14:41.2405593Z * [new branch] gh/wconstab/460/orig -> origin/gh/wconstab/460/orig 2025-12-04T10:14:41.2405662Z * [new branch] gh/wconstab/461/base -> origin/gh/wconstab/461/base 2025-12-04T10:14:41.2405732Z * [new branch] gh/wconstab/461/head -> origin/gh/wconstab/461/head 2025-12-04T10:14:41.2405804Z * [new branch] gh/wconstab/461/orig -> origin/gh/wconstab/461/orig 2025-12-04T10:14:41.2405873Z * [new branch] gh/wconstab/462/base -> origin/gh/wconstab/462/base 2025-12-04T10:14:41.2405943Z * [new branch] gh/wconstab/462/head -> origin/gh/wconstab/462/head 2025-12-04T10:14:41.2406036Z * [new branch] gh/wconstab/462/orig -> origin/gh/wconstab/462/orig 2025-12-04T10:14:41.2406106Z * [new branch] gh/wconstab/463/base -> origin/gh/wconstab/463/base 2025-12-04T10:14:41.2406175Z * [new branch] gh/wconstab/463/head -> origin/gh/wconstab/463/head 2025-12-04T10:14:41.2406244Z * [new branch] gh/wconstab/463/orig -> origin/gh/wconstab/463/orig 2025-12-04T10:14:41.2406313Z * [new branch] gh/wconstab/464/base -> origin/gh/wconstab/464/base 2025-12-04T10:14:41.2406383Z * [new branch] gh/wconstab/464/head -> origin/gh/wconstab/464/head 2025-12-04T10:14:41.2406454Z * [new branch] gh/wconstab/464/orig -> origin/gh/wconstab/464/orig 2025-12-04T10:14:41.2406526Z * [new branch] gh/wconstab/465/base -> origin/gh/wconstab/465/base 2025-12-04T10:14:41.2406596Z * [new branch] gh/wconstab/465/head -> 
origin/gh/wconstab/465/head 2025-12-04T10:14:41.2406667Z * [new branch] gh/wconstab/465/orig -> origin/gh/wconstab/465/orig 2025-12-04T10:14:41.2406737Z * [new branch] gh/wconstab/466/base -> origin/gh/wconstab/466/base 2025-12-04T10:14:41.2406809Z * [new branch] gh/wconstab/466/head -> origin/gh/wconstab/466/head 2025-12-04T10:14:41.2406878Z * [new branch] gh/wconstab/466/orig -> origin/gh/wconstab/466/orig 2025-12-04T10:14:41.2406947Z * [new branch] gh/wconstab/467/base -> origin/gh/wconstab/467/base 2025-12-04T10:14:41.2407017Z * [new branch] gh/wconstab/467/head -> origin/gh/wconstab/467/head 2025-12-04T10:14:41.2407088Z * [new branch] gh/wconstab/467/orig -> origin/gh/wconstab/467/orig 2025-12-04T10:14:41.2407158Z * [new branch] gh/wconstab/468/base -> origin/gh/wconstab/468/base 2025-12-04T10:14:41.2407227Z * [new branch] gh/wconstab/468/head -> origin/gh/wconstab/468/head 2025-12-04T10:14:41.2407297Z * [new branch] gh/wconstab/468/orig -> origin/gh/wconstab/468/orig 2025-12-04T10:14:41.2407369Z * [new branch] gh/weifengpy/39/base -> origin/gh/weifengpy/39/base 2025-12-04T10:14:41.2407440Z * [new branch] gh/weifengpy/39/head -> origin/gh/weifengpy/39/head 2025-12-04T10:14:41.2407510Z * [new branch] gh/weifengpy/39/orig -> origin/gh/weifengpy/39/orig 2025-12-04T10:14:41.2407581Z * [new branch] gh/weifengpy/40/base -> origin/gh/weifengpy/40/base 2025-12-04T10:14:41.2407651Z * [new branch] gh/weifengpy/40/head -> origin/gh/weifengpy/40/head 2025-12-04T10:14:41.2407747Z * [new branch] gh/weifengpy/40/orig -> origin/gh/weifengpy/40/orig 2025-12-04T10:14:41.2407818Z * [new branch] gh/weifengpy/41/base -> origin/gh/weifengpy/41/base 2025-12-04T10:14:41.2407888Z * [new branch] gh/weifengpy/41/head -> origin/gh/weifengpy/41/head 2025-12-04T10:14:41.2407961Z * [new branch] gh/weifengpy/41/orig -> origin/gh/weifengpy/41/orig 2025-12-04T10:14:41.2408043Z * [new branch] gh/williamwen42/250/base -> origin/gh/williamwen42/250/base 2025-12-04T10:14:41.2408122Z * [new branch] gh/williamwen42/250/head -> origin/gh/williamwen42/250/head 2025-12-04T10:14:41.2408200Z * [new branch] gh/williamwen42/250/orig -> origin/gh/williamwen42/250/orig 2025-12-04T10:14:41.2408278Z * [new branch] gh/williamwen42/279/base -> origin/gh/williamwen42/279/base 2025-12-04T10:14:41.2408356Z * [new branch] gh/williamwen42/279/head -> origin/gh/williamwen42/279/head 2025-12-04T10:14:41.2408432Z * [new branch] gh/williamwen42/279/orig -> origin/gh/williamwen42/279/orig 2025-12-04T10:14:41.2408508Z * [new branch] gh/williamwen42/282/base -> origin/gh/williamwen42/282/base 2025-12-04T10:14:41.2408584Z * [new branch] gh/williamwen42/282/head -> origin/gh/williamwen42/282/head 2025-12-04T10:14:41.2408693Z * [new branch] gh/williamwen42/282/orig -> origin/gh/williamwen42/282/orig 2025-12-04T10:14:41.2408769Z * [new branch] gh/williamwen42/287/base -> origin/gh/williamwen42/287/base 2025-12-04T10:14:41.2408845Z * [new branch] gh/williamwen42/287/head -> origin/gh/williamwen42/287/head 2025-12-04T10:14:41.2408921Z * [new branch] gh/williamwen42/287/orig -> origin/gh/williamwen42/287/orig 2025-12-04T10:14:41.2408997Z * [new branch] gh/williamwen42/288/base -> origin/gh/williamwen42/288/base 2025-12-04T10:14:41.2409074Z * [new branch] gh/williamwen42/288/head -> origin/gh/williamwen42/288/head 2025-12-04T10:14:41.2409152Z * [new branch] gh/williamwen42/288/orig -> origin/gh/williamwen42/288/orig 2025-12-04T10:14:41.2409228Z * [new branch] gh/williamwen42/296/base -> origin/gh/williamwen42/296/base 2025-12-04T10:14:41.2409305Z * [new 
branch] gh/williamwen42/296/head -> origin/gh/williamwen42/296/head 2025-12-04T10:14:41.2409383Z * [new branch] gh/williamwen42/296/orig -> origin/gh/williamwen42/296/orig 2025-12-04T10:14:41.2409460Z * [new branch] gh/williamwen42/297/base -> origin/gh/williamwen42/297/base 2025-12-04T10:14:41.2409537Z * [new branch] gh/williamwen42/297/head -> origin/gh/williamwen42/297/head 2025-12-04T10:14:41.2409614Z * [new branch] gh/williamwen42/297/orig -> origin/gh/williamwen42/297/orig 2025-12-04T10:14:41.2409692Z * [new branch] gh/williamwen42/306/base -> origin/gh/williamwen42/306/base 2025-12-04T10:14:41.2409768Z * [new branch] gh/williamwen42/306/head -> origin/gh/williamwen42/306/head 2025-12-04T10:14:41.2409844Z * [new branch] gh/williamwen42/306/orig -> origin/gh/williamwen42/306/orig 2025-12-04T10:14:41.2409920Z * [new branch] gh/williamwen42/309/base -> origin/gh/williamwen42/309/base 2025-12-04T10:14:41.2409998Z * [new branch] gh/williamwen42/309/head -> origin/gh/williamwen42/309/head 2025-12-04T10:14:41.2410074Z * [new branch] gh/williamwen42/309/orig -> origin/gh/williamwen42/309/orig 2025-12-04T10:14:41.2410150Z * [new branch] gh/williamwen42/310/base -> origin/gh/williamwen42/310/base 2025-12-04T10:14:41.2410227Z * [new branch] gh/williamwen42/310/head -> origin/gh/williamwen42/310/head 2025-12-04T10:14:41.2410303Z * [new branch] gh/williamwen42/310/orig -> origin/gh/williamwen42/310/orig 2025-12-04T10:14:41.2410403Z * [new branch] gh/williamwen42/311/base -> origin/gh/williamwen42/311/base 2025-12-04T10:14:41.2410480Z * [new branch] gh/williamwen42/311/head -> origin/gh/williamwen42/311/head 2025-12-04T10:14:41.2410555Z * [new branch] gh/williamwen42/311/orig -> origin/gh/williamwen42/311/orig 2025-12-04T10:14:41.2410680Z * [new branch] gh/williamwen42/319/base -> origin/gh/williamwen42/319/base 2025-12-04T10:14:41.2410759Z * [new branch] gh/williamwen42/319/head -> origin/gh/williamwen42/319/head 2025-12-04T10:14:41.2410835Z * [new branch] gh/williamwen42/319/orig -> origin/gh/williamwen42/319/orig 2025-12-04T10:14:41.2410912Z * [new branch] gh/williamwen42/325/base -> origin/gh/williamwen42/325/base 2025-12-04T10:14:41.2410989Z * [new branch] gh/williamwen42/325/head -> origin/gh/williamwen42/325/head 2025-12-04T10:14:41.2411066Z * [new branch] gh/williamwen42/325/orig -> origin/gh/williamwen42/325/orig 2025-12-04T10:14:41.2411144Z * [new branch] gh/williamwen42/326/base -> origin/gh/williamwen42/326/base 2025-12-04T10:14:41.2411221Z * [new branch] gh/williamwen42/326/head -> origin/gh/williamwen42/326/head 2025-12-04T10:14:41.2411298Z * [new branch] gh/williamwen42/326/orig -> origin/gh/williamwen42/326/orig 2025-12-04T10:14:41.2411417Z * [new branch] gh/williamwen42/327/base -> origin/gh/williamwen42/327/base 2025-12-04T10:14:41.2411496Z * [new branch] gh/williamwen42/327/head -> origin/gh/williamwen42/327/head 2025-12-04T10:14:41.2411572Z * [new branch] gh/williamwen42/327/orig -> origin/gh/williamwen42/327/orig 2025-12-04T10:14:41.2411649Z * [new branch] gh/williamwen42/328/base -> origin/gh/williamwen42/328/base 2025-12-04T10:14:41.2411725Z * [new branch] gh/williamwen42/328/head -> origin/gh/williamwen42/328/head 2025-12-04T10:14:41.2411803Z * [new branch] gh/williamwen42/328/orig -> origin/gh/williamwen42/328/orig 2025-12-04T10:14:41.2411879Z * [new branch] gh/williamwen42/329/base -> origin/gh/williamwen42/329/base 2025-12-04T10:14:41.2411955Z * [new branch] gh/williamwen42/329/head -> origin/gh/williamwen42/329/head 2025-12-04T10:14:41.2412032Z * [new branch] 
gh/williamwen42/329/orig -> origin/gh/williamwen42/329/orig 2025-12-04T10:14:41.2412110Z * [new branch] gh/williamwen42/330/base -> origin/gh/williamwen42/330/base 2025-12-04T10:14:41.2412185Z * [new branch] gh/williamwen42/330/head -> origin/gh/williamwen42/330/head 2025-12-04T10:14:41.2412262Z * [new branch] gh/williamwen42/330/orig -> origin/gh/williamwen42/330/orig 2025-12-04T10:14:41.2412340Z * [new branch] gh/williamwen42/331/base -> origin/gh/williamwen42/331/base 2025-12-04T10:14:41.2412416Z * [new branch] gh/williamwen42/331/head -> origin/gh/williamwen42/331/head 2025-12-04T10:14:41.2412494Z * [new branch] gh/williamwen42/331/orig -> origin/gh/williamwen42/331/orig 2025-12-04T10:14:41.2412571Z * [new branch] gh/williamwen42/332/base -> origin/gh/williamwen42/332/base 2025-12-04T10:14:41.2412647Z * [new branch] gh/williamwen42/332/head -> origin/gh/williamwen42/332/head 2025-12-04T10:14:41.2412723Z * [new branch] gh/williamwen42/332/orig -> origin/gh/williamwen42/332/orig 2025-12-04T10:14:41.2412800Z * [new branch] gh/williamwen42/333/base -> origin/gh/williamwen42/333/base 2025-12-04T10:14:41.2412876Z * [new branch] gh/williamwen42/333/head -> origin/gh/williamwen42/333/head 2025-12-04T10:14:41.2412951Z * [new branch] gh/williamwen42/333/orig -> origin/gh/williamwen42/333/orig 2025-12-04T10:14:41.2413028Z * [new branch] gh/williamwen42/334/base -> origin/gh/williamwen42/334/base 2025-12-04T10:14:41.2413142Z * [new branch] gh/williamwen42/334/head -> origin/gh/williamwen42/334/head 2025-12-04T10:14:41.2413219Z * [new branch] gh/williamwen42/334/orig -> origin/gh/williamwen42/334/orig 2025-12-04T10:14:41.2413295Z * [new branch] gh/williamwen42/335/base -> origin/gh/williamwen42/335/base 2025-12-04T10:14:41.2413372Z * [new branch] gh/williamwen42/335/head -> origin/gh/williamwen42/335/head 2025-12-04T10:14:41.2413449Z * [new branch] gh/williamwen42/335/orig -> origin/gh/williamwen42/335/orig 2025-12-04T10:14:41.2413525Z * [new branch] gh/williamwen42/336/base -> origin/gh/williamwen42/336/base 2025-12-04T10:14:41.2413601Z * [new branch] gh/williamwen42/336/head -> origin/gh/williamwen42/336/head 2025-12-04T10:14:41.2413678Z * [new branch] gh/williamwen42/336/orig -> origin/gh/williamwen42/336/orig 2025-12-04T10:14:41.2413754Z * [new branch] gh/williamwen42/337/base -> origin/gh/williamwen42/337/base 2025-12-04T10:14:41.2413832Z * [new branch] gh/williamwen42/337/head -> origin/gh/williamwen42/337/head 2025-12-04T10:14:41.2413910Z * [new branch] gh/williamwen42/337/orig -> origin/gh/williamwen42/337/orig 2025-12-04T10:14:41.2413986Z * [new branch] gh/williamwen42/338/base -> origin/gh/williamwen42/338/base 2025-12-04T10:14:41.2414094Z * [new branch] gh/williamwen42/338/head -> origin/gh/williamwen42/338/head 2025-12-04T10:14:41.2414172Z * [new branch] gh/williamwen42/338/orig -> origin/gh/williamwen42/338/orig 2025-12-04T10:14:41.2414248Z * [new branch] gh/williamwen42/339/base -> origin/gh/williamwen42/339/base 2025-12-04T10:14:41.2414323Z * [new branch] gh/williamwen42/339/head -> origin/gh/williamwen42/339/head 2025-12-04T10:14:41.2414401Z * [new branch] gh/williamwen42/339/orig -> origin/gh/williamwen42/339/orig 2025-12-04T10:14:41.2414478Z * [new branch] gh/williamwen42/340/base -> origin/gh/williamwen42/340/base 2025-12-04T10:14:41.2414555Z * [new branch] gh/williamwen42/340/head -> origin/gh/williamwen42/340/head 2025-12-04T10:14:41.2414631Z * [new branch] gh/williamwen42/340/orig -> origin/gh/williamwen42/340/orig 2025-12-04T10:14:41.2414708Z * [new branch] 
gh/williamwen42/341/base -> origin/gh/williamwen42/341/base 2025-12-04T10:14:41.2414785Z * [new branch] gh/williamwen42/341/head -> origin/gh/williamwen42/341/head 2025-12-04T10:14:41.2414861Z * [new branch] gh/williamwen42/341/orig -> origin/gh/williamwen42/341/orig 2025-12-04T10:14:41.2414937Z * [new branch] gh/williamwen42/342/base -> origin/gh/williamwen42/342/base 2025-12-04T10:14:41.2415014Z * [new branch] gh/williamwen42/342/head -> origin/gh/williamwen42/342/head 2025-12-04T10:14:41.2415090Z * [new branch] gh/williamwen42/342/orig -> origin/gh/williamwen42/342/orig 2025-12-04T10:14:41.2415168Z * [new branch] gh/williamwen42/343/base -> origin/gh/williamwen42/343/base 2025-12-04T10:14:41.2415245Z * [new branch] gh/williamwen42/343/head -> origin/gh/williamwen42/343/head 2025-12-04T10:14:41.2415324Z * [new branch] gh/williamwen42/343/orig -> origin/gh/williamwen42/343/orig 2025-12-04T10:14:41.2415401Z * [new branch] gh/williamwen42/344/base -> origin/gh/williamwen42/344/base 2025-12-04T10:14:41.2415478Z * [new branch] gh/williamwen42/344/head -> origin/gh/williamwen42/344/head 2025-12-04T10:14:41.2415554Z * [new branch] gh/williamwen42/344/orig -> origin/gh/williamwen42/344/orig 2025-12-04T10:14:41.2415630Z * [new branch] gh/williamwen42/345/base -> origin/gh/williamwen42/345/base 2025-12-04T10:14:41.2415707Z * [new branch] gh/williamwen42/345/head -> origin/gh/williamwen42/345/head 2025-12-04T10:14:41.2415811Z * [new branch] gh/williamwen42/345/orig -> origin/gh/williamwen42/345/orig 2025-12-04T10:14:41.2415886Z * [new branch] gh/williamwen42/346/base -> origin/gh/williamwen42/346/base 2025-12-04T10:14:41.2415964Z * [new branch] gh/williamwen42/346/head -> origin/gh/williamwen42/346/head 2025-12-04T10:14:41.2416041Z * [new branch] gh/williamwen42/346/orig -> origin/gh/williamwen42/346/orig 2025-12-04T10:14:41.2416118Z * [new branch] gh/williamwen42/347/base -> origin/gh/williamwen42/347/base 2025-12-04T10:14:41.2416194Z * [new branch] gh/williamwen42/347/head -> origin/gh/williamwen42/347/head 2025-12-04T10:14:41.2416270Z * [new branch] gh/williamwen42/347/orig -> origin/gh/williamwen42/347/orig 2025-12-04T10:14:41.2416347Z * [new branch] gh/williamwen42/348/base -> origin/gh/williamwen42/348/base 2025-12-04T10:14:41.2416423Z * [new branch] gh/williamwen42/348/head -> origin/gh/williamwen42/348/head 2025-12-04T10:14:41.2416500Z * [new branch] gh/williamwen42/348/orig -> origin/gh/williamwen42/348/orig 2025-12-04T10:14:41.2416577Z * [new branch] gh/williamwen42/349/base -> origin/gh/williamwen42/349/base 2025-12-04T10:14:41.2416653Z * [new branch] gh/williamwen42/349/head -> origin/gh/williamwen42/349/head 2025-12-04T10:14:41.2416757Z * [new branch] gh/williamwen42/349/orig -> origin/gh/williamwen42/349/orig 2025-12-04T10:14:41.2416835Z * [new branch] gh/williamwen42/350/base -> origin/gh/williamwen42/350/base 2025-12-04T10:14:41.2416911Z * [new branch] gh/williamwen42/350/head -> origin/gh/williamwen42/350/head 2025-12-04T10:14:41.2416987Z * [new branch] gh/williamwen42/350/orig -> origin/gh/williamwen42/350/orig 2025-12-04T10:14:41.2417064Z * [new branch] gh/williamwen42/351/base -> origin/gh/williamwen42/351/base 2025-12-04T10:14:41.2417141Z * [new branch] gh/williamwen42/351/head -> origin/gh/williamwen42/351/head 2025-12-04T10:14:41.2417217Z * [new branch] gh/williamwen42/351/orig -> origin/gh/williamwen42/351/orig 2025-12-04T10:14:41.2417294Z * [new branch] gh/williamwen42/352/base -> origin/gh/williamwen42/352/base 2025-12-04T10:14:41.2417370Z * [new branch] 
gh/williamwen42/352/head -> origin/gh/williamwen42/352/head 2025-12-04T10:14:41.2417448Z * [new branch] gh/williamwen42/352/orig -> origin/gh/williamwen42/352/orig 2025-12-04T10:14:41.2417525Z * [new branch] gh/williamwen42/353/base -> origin/gh/williamwen42/353/base 2025-12-04T10:14:41.2417602Z * [new branch] gh/williamwen42/353/head -> origin/gh/williamwen42/353/head 2025-12-04T10:14:41.2417680Z * [new branch] gh/williamwen42/353/orig -> origin/gh/williamwen42/353/orig 2025-12-04T10:14:41.2417756Z * [new branch] gh/williamwen42/354/base -> origin/gh/williamwen42/354/base 2025-12-04T10:14:41.2417832Z * [new branch] gh/williamwen42/354/head -> origin/gh/williamwen42/354/head 2025-12-04T10:14:41.2417909Z * [new branch] gh/williamwen42/354/orig -> origin/gh/williamwen42/354/orig 2025-12-04T10:14:41.2417984Z * [new branch] gh/williamwen42/355/base -> origin/gh/williamwen42/355/base 2025-12-04T10:14:41.2418062Z * [new branch] gh/williamwen42/355/head -> origin/gh/williamwen42/355/head 2025-12-04T10:14:41.2418139Z * [new branch] gh/williamwen42/355/orig -> origin/gh/williamwen42/355/orig 2025-12-04T10:14:41.2418215Z * [new branch] gh/williamwen42/356/base -> origin/gh/williamwen42/356/base 2025-12-04T10:14:41.2418291Z * [new branch] gh/williamwen42/356/head -> origin/gh/williamwen42/356/head 2025-12-04T10:14:41.2418368Z * [new branch] gh/williamwen42/356/orig -> origin/gh/williamwen42/356/orig 2025-12-04T10:14:41.2418477Z * [new branch] gh/williamwen42/357/base -> origin/gh/williamwen42/357/base 2025-12-04T10:14:41.2418554Z * [new branch] gh/williamwen42/357/head -> origin/gh/williamwen42/357/head 2025-12-04T10:14:41.2418631Z * [new branch] gh/williamwen42/357/orig -> origin/gh/williamwen42/357/orig 2025-12-04T10:14:41.2418706Z * [new branch] gh/williamwen42/358/base -> origin/gh/williamwen42/358/base 2025-12-04T10:14:41.2418784Z * [new branch] gh/williamwen42/358/head -> origin/gh/williamwen42/358/head 2025-12-04T10:14:41.2418861Z * [new branch] gh/williamwen42/358/orig -> origin/gh/williamwen42/358/orig 2025-12-04T10:14:41.2418930Z * [new branch] gh/xmfan/169/base -> origin/gh/xmfan/169/base 2025-12-04T10:14:41.2418999Z * [new branch] gh/xmfan/169/head -> origin/gh/xmfan/169/head 2025-12-04T10:14:41.2419068Z * [new branch] gh/xmfan/170/base -> origin/gh/xmfan/170/base 2025-12-04T10:14:41.2419135Z * [new branch] gh/xmfan/170/head -> origin/gh/xmfan/170/head 2025-12-04T10:14:41.2419202Z * [new branch] gh/xmfan/274/base -> origin/gh/xmfan/274/base 2025-12-04T10:14:41.2419267Z * [new branch] gh/xmfan/274/head -> origin/gh/xmfan/274/head 2025-12-04T10:14:41.2419332Z * [new branch] gh/xmfan/274/orig -> origin/gh/xmfan/274/orig 2025-12-04T10:14:41.2419427Z * [new branch] gh/xmfan/277/base -> origin/gh/xmfan/277/base 2025-12-04T10:14:41.2419493Z * [new branch] gh/xmfan/277/head -> origin/gh/xmfan/277/head 2025-12-04T10:14:41.2419558Z * [new branch] gh/xmfan/277/orig -> origin/gh/xmfan/277/orig 2025-12-04T10:14:41.2419624Z * [new branch] gh/xmfan/301/base -> origin/gh/xmfan/301/base 2025-12-04T10:14:41.2419689Z * [new branch] gh/xmfan/301/head -> origin/gh/xmfan/301/head 2025-12-04T10:14:41.2419756Z * [new branch] gh/xmfan/301/orig -> origin/gh/xmfan/301/orig 2025-12-04T10:14:41.2419822Z * [new branch] gh/xmfan/304/base -> origin/gh/xmfan/304/base 2025-12-04T10:14:41.2419889Z * [new branch] gh/xmfan/304/head -> origin/gh/xmfan/304/head 2025-12-04T10:14:41.2419954Z * [new branch] gh/xmfan/304/orig -> origin/gh/xmfan/304/orig 2025-12-04T10:14:41.2420022Z * [new branch] gh/xmfan/309/base -> 
origin/gh/xmfan/309/base 2025-12-04T10:14:41.2420087Z * [new branch] gh/xmfan/309/head -> origin/gh/xmfan/309/head 2025-12-04T10:14:41.2420151Z * [new branch] gh/xmfan/309/orig -> origin/gh/xmfan/309/orig 2025-12-04T10:14:41.2420218Z * [new branch] gh/xmfan/310/base -> origin/gh/xmfan/310/base 2025-12-04T10:14:41.2420283Z * [new branch] gh/xmfan/310/head -> origin/gh/xmfan/310/head 2025-12-04T10:14:41.2420349Z * [new branch] gh/xmfan/310/orig -> origin/gh/xmfan/310/orig 2025-12-04T10:14:41.2420416Z * [new branch] gh/xmfan/311/base -> origin/gh/xmfan/311/base 2025-12-04T10:14:41.2420481Z * [new branch] gh/xmfan/311/head -> origin/gh/xmfan/311/head 2025-12-04T10:14:41.2420550Z * [new branch] gh/xmfan/311/orig -> origin/gh/xmfan/311/orig 2025-12-04T10:14:41.2420648Z * [new branch] gh/xmfan/312/base -> origin/gh/xmfan/312/base 2025-12-04T10:14:41.2420715Z * [new branch] gh/xmfan/312/head -> origin/gh/xmfan/312/head 2025-12-04T10:14:41.2420782Z * [new branch] gh/xmfan/312/orig -> origin/gh/xmfan/312/orig 2025-12-04T10:14:41.2420847Z * [new branch] gh/xmfan/313/base -> origin/gh/xmfan/313/base 2025-12-04T10:14:41.2420912Z * [new branch] gh/xmfan/313/head -> origin/gh/xmfan/313/head 2025-12-04T10:14:41.2420978Z * [new branch] gh/xmfan/313/orig -> origin/gh/xmfan/313/orig 2025-12-04T10:14:41.2421094Z * [new branch] gh/xuanzhang816/27/base -> origin/gh/xuanzhang816/27/base 2025-12-04T10:14:41.2421171Z * [new branch] gh/xuanzhang816/27/head -> origin/gh/xuanzhang816/27/head 2025-12-04T10:14:41.2421247Z * [new branch] gh/xuanzhang816/27/orig -> origin/gh/xuanzhang816/27/orig 2025-12-04T10:14:41.2421323Z * [new branch] gh/xuanzhang816/32/base -> origin/gh/xuanzhang816/32/base 2025-12-04T10:14:41.2421398Z * [new branch] gh/xuanzhang816/32/head -> origin/gh/xuanzhang816/32/head 2025-12-04T10:14:41.2421473Z * [new branch] gh/xuanzhang816/32/orig -> origin/gh/xuanzhang816/32/orig 2025-12-04T10:14:41.2421546Z * [new branch] gh/xuanzhang816/33/base -> origin/gh/xuanzhang816/33/base 2025-12-04T10:14:41.2421619Z * [new branch] gh/xuanzhang816/33/head -> origin/gh/xuanzhang816/33/head 2025-12-04T10:14:41.2421696Z * [new branch] gh/xuanzhang816/33/orig -> origin/gh/xuanzhang816/33/orig 2025-12-04T10:14:41.2421769Z * [new branch] gh/xuanzhang816/34/base -> origin/gh/xuanzhang816/34/base 2025-12-04T10:14:41.2421842Z * [new branch] gh/xuanzhang816/34/head -> origin/gh/xuanzhang816/34/head 2025-12-04T10:14:41.2421916Z * [new branch] gh/xuanzhang816/34/orig -> origin/gh/xuanzhang816/34/orig 2025-12-04T10:14:41.2422027Z * [new branch] gh/xuanzhang816/35/base -> origin/gh/xuanzhang816/35/base 2025-12-04T10:14:41.2422102Z * [new branch] gh/xuanzhang816/35/head -> origin/gh/xuanzhang816/35/head 2025-12-04T10:14:41.2422177Z * [new branch] gh/xuanzhang816/35/orig -> origin/gh/xuanzhang816/35/orig 2025-12-04T10:14:41.2422248Z * [new branch] gh/yanbing-j/11/base -> origin/gh/yanbing-j/11/base 2025-12-04T10:14:41.2422320Z * [new branch] gh/yanbing-j/11/head -> origin/gh/yanbing-j/11/head 2025-12-04T10:14:41.2422391Z * [new branch] gh/yanbing-j/11/orig -> origin/gh/yanbing-j/11/orig 2025-12-04T10:14:41.2422462Z * [new branch] gh/yanbing-j/12/base -> origin/gh/yanbing-j/12/base 2025-12-04T10:14:41.2422531Z * [new branch] gh/yanbing-j/12/head -> origin/gh/yanbing-j/12/head 2025-12-04T10:14:41.2422601Z * [new branch] gh/yanbing-j/12/orig -> origin/gh/yanbing-j/12/orig 2025-12-04T10:14:41.2422669Z * [new branch] gh/yanbing-j/13/base -> origin/gh/yanbing-j/13/base 2025-12-04T10:14:41.2422739Z * [new branch] gh/yanbing-j/13/head -> 
origin/gh/yanbing-j/13/head 2025-12-04T10:14:41.2422806Z * [new branch] gh/yanbing-j/13/orig -> origin/gh/yanbing-j/13/orig 2025-12-04T10:14:41.2422874Z * [new branch] gh/yanbing-j/14/base -> origin/gh/yanbing-j/14/base 2025-12-04T10:14:41.2422943Z * [new branch] gh/yanbing-j/14/head -> origin/gh/yanbing-j/14/head 2025-12-04T10:14:41.2423013Z * [new branch] gh/yanbing-j/14/orig -> origin/gh/yanbing-j/14/orig 2025-12-04T10:14:41.2423082Z * [new branch] gh/yanbing-j/15/base -> origin/gh/yanbing-j/15/base 2025-12-04T10:14:41.2423152Z * [new branch] gh/yanbing-j/15/head -> origin/gh/yanbing-j/15/head 2025-12-04T10:14:41.2423221Z * [new branch] gh/yanbing-j/15/orig -> origin/gh/yanbing-j/15/orig 2025-12-04T10:14:41.2423289Z * [new branch] gh/yanbing-j/18/base -> origin/gh/yanbing-j/18/base 2025-12-04T10:14:41.2423359Z * [new branch] gh/yanbing-j/18/head -> origin/gh/yanbing-j/18/head 2025-12-04T10:14:41.2423426Z * [new branch] gh/yanbing-j/18/orig -> origin/gh/yanbing-j/18/orig 2025-12-04T10:14:41.2423495Z * [new branch] gh/yanbing-j/19/base -> origin/gh/yanbing-j/19/base 2025-12-04T10:14:41.2423568Z * [new branch] gh/yanbing-j/19/head -> origin/gh/yanbing-j/19/head 2025-12-04T10:14:41.2423694Z * [new branch] gh/yanbing-j/19/orig -> origin/gh/yanbing-j/19/orig 2025-12-04T10:14:41.2423763Z * [new branch] gh/yanbing-j/20/base -> origin/gh/yanbing-j/20/base 2025-12-04T10:14:41.2423831Z * [new branch] gh/yanbing-j/20/head -> origin/gh/yanbing-j/20/head 2025-12-04T10:14:41.2423900Z * [new branch] gh/yanbing-j/20/orig -> origin/gh/yanbing-j/20/orig 2025-12-04T10:14:41.2423970Z * [new branch] gh/yanbing-j/21/base -> origin/gh/yanbing-j/21/base 2025-12-04T10:14:41.2424038Z * [new branch] gh/yanbing-j/21/head -> origin/gh/yanbing-j/21/head 2025-12-04T10:14:41.2424106Z * [new branch] gh/yanbing-j/22/base -> origin/gh/yanbing-j/22/base 2025-12-04T10:14:41.2424176Z * [new branch] gh/yanbing-j/22/head -> origin/gh/yanbing-j/22/head 2025-12-04T10:14:41.2424245Z * [new branch] gh/yanbing-j/22/orig -> origin/gh/yanbing-j/22/orig 2025-12-04T10:14:41.2424314Z * [new branch] gh/yanbing-j/23/base -> origin/gh/yanbing-j/23/base 2025-12-04T10:14:41.2424384Z * [new branch] gh/yanbing-j/23/head -> origin/gh/yanbing-j/23/head 2025-12-04T10:14:41.2424452Z * [new branch] gh/yanbing-j/23/orig -> origin/gh/yanbing-j/23/orig 2025-12-04T10:14:41.2424550Z * [new branch] gh/yanbing-j/24/base -> origin/gh/yanbing-j/24/base 2025-12-04T10:14:41.2424620Z * [new branch] gh/yanbing-j/24/head -> origin/gh/yanbing-j/24/head 2025-12-04T10:14:41.2424690Z * [new branch] gh/yanbing-j/24/orig -> origin/gh/yanbing-j/24/orig 2025-12-04T10:14:41.2424758Z * [new branch] gh/yanbing-j/25/base -> origin/gh/yanbing-j/25/base 2025-12-04T10:14:41.2424828Z * [new branch] gh/yanbing-j/25/head -> origin/gh/yanbing-j/25/head 2025-12-04T10:14:41.2424896Z * [new branch] gh/yanbing-j/25/orig -> origin/gh/yanbing-j/25/orig 2025-12-04T10:14:41.2424967Z * [new branch] gh/yanbing-j/26/base -> origin/gh/yanbing-j/26/base 2025-12-04T10:14:41.2425038Z * [new branch] gh/yanbing-j/26/head -> origin/gh/yanbing-j/26/head 2025-12-04T10:14:41.2425107Z * [new branch] gh/yanbing-j/26/orig -> origin/gh/yanbing-j/26/orig 2025-12-04T10:14:41.2425188Z * [new branch] gh/yang-yu-hang/1/base -> origin/gh/yang-yu-hang/1/base 2025-12-04T10:14:41.2425262Z * [new branch] gh/yang-yu-hang/1/head -> origin/gh/yang-yu-hang/1/head 2025-12-04T10:14:41.2425334Z * [new branch] gh/yang-yu-hang/1/orig -> origin/gh/yang-yu-hang/1/orig 2025-12-04T10:14:41.2425407Z * [new branch] 
gh/yang-yu-hang/2/base -> origin/gh/yang-yu-hang/2/base 2025-12-04T10:14:41.2425478Z * [new branch] gh/yang-yu-hang/2/head -> origin/gh/yang-yu-hang/2/head 2025-12-04T10:14:41.2425551Z * [new branch] gh/yang-yu-hang/2/orig -> origin/gh/yang-yu-hang/2/orig 2025-12-04T10:14:41.2425623Z * [new branch] gh/yang-yu-hang/3/base -> origin/gh/yang-yu-hang/3/base 2025-12-04T10:14:41.2425693Z * [new branch] gh/yang-yu-hang/3/head -> origin/gh/yang-yu-hang/3/head 2025-12-04T10:14:41.2425764Z * [new branch] gh/yang-yu-hang/3/orig -> origin/gh/yang-yu-hang/3/orig 2025-12-04T10:14:41.2425838Z * [new branch] gh/yangw-dev/12/base -> origin/gh/yangw-dev/12/base 2025-12-04T10:14:41.2425909Z * [new branch] gh/yangw-dev/12/head -> origin/gh/yangw-dev/12/head 2025-12-04T10:14:41.2425979Z * [new branch] gh/yangw-dev/12/orig -> origin/gh/yangw-dev/12/orig 2025-12-04T10:14:41.2426049Z * [new branch] gh/yangw-dev/13/base -> origin/gh/yangw-dev/13/base 2025-12-04T10:14:41.2426118Z * [new branch] gh/yangw-dev/13/head -> origin/gh/yangw-dev/13/head 2025-12-04T10:14:41.2426213Z * [new branch] gh/yangw-dev/13/orig -> origin/gh/yangw-dev/13/orig 2025-12-04T10:14:41.2426283Z * [new branch] gh/yangw-dev/14/base -> origin/gh/yangw-dev/14/base 2025-12-04T10:14:41.2426352Z * [new branch] gh/yangw-dev/14/head -> origin/gh/yangw-dev/14/head 2025-12-04T10:14:41.2426421Z * [new branch] gh/yangw-dev/14/orig -> origin/gh/yangw-dev/14/orig 2025-12-04T10:14:41.2426491Z * [new branch] gh/yangw-dev/15/base -> origin/gh/yangw-dev/15/base 2025-12-04T10:14:41.2426561Z * [new branch] gh/yangw-dev/15/head -> origin/gh/yangw-dev/15/head 2025-12-04T10:14:41.2426631Z * [new branch] gh/yangw-dev/15/orig -> origin/gh/yangw-dev/15/orig 2025-12-04T10:14:41.2426700Z * [new branch] gh/yangw-dev/19/base -> origin/gh/yangw-dev/19/base 2025-12-04T10:14:41.2426769Z * [new branch] gh/yangw-dev/19/head -> origin/gh/yangw-dev/19/head 2025-12-04T10:14:41.2426840Z * [new branch] gh/yangw-dev/19/orig -> origin/gh/yangw-dev/19/orig 2025-12-04T10:14:41.2426909Z * [new branch] gh/yangw-dev/26/base -> origin/gh/yangw-dev/26/base 2025-12-04T10:14:41.2426978Z * [new branch] gh/yangw-dev/26/head -> origin/gh/yangw-dev/26/head 2025-12-04T10:14:41.2427047Z * [new branch] gh/yangw-dev/26/orig -> origin/gh/yangw-dev/26/orig 2025-12-04T10:14:41.2427156Z * [new branch] gh/yangw-dev/27/base -> origin/gh/yangw-dev/27/base 2025-12-04T10:14:41.2427226Z * [new branch] gh/yangw-dev/27/head -> origin/gh/yangw-dev/27/head 2025-12-04T10:14:41.2427296Z * [new branch] gh/yangw-dev/27/orig -> origin/gh/yangw-dev/27/orig 2025-12-04T10:14:41.2427364Z * [new branch] gh/ydwu4/292/base -> origin/gh/ydwu4/292/base 2025-12-04T10:14:41.2427430Z * [new branch] gh/ydwu4/292/head -> origin/gh/ydwu4/292/head 2025-12-04T10:14:41.2427499Z * [new branch] gh/ydwu4/292/orig -> origin/gh/ydwu4/292/orig 2025-12-04T10:14:41.2427564Z * [new branch] gh/ydwu4/294/base -> origin/gh/ydwu4/294/base 2025-12-04T10:14:41.2427629Z * [new branch] gh/ydwu4/294/head -> origin/gh/ydwu4/294/head 2025-12-04T10:14:41.2427695Z * [new branch] gh/ydwu4/294/orig -> origin/gh/ydwu4/294/orig 2025-12-04T10:14:41.2427761Z * [new branch] gh/ydwu4/295/base -> origin/gh/ydwu4/295/base 2025-12-04T10:14:41.2427826Z * [new branch] gh/ydwu4/295/head -> origin/gh/ydwu4/295/head 2025-12-04T10:14:41.2427892Z * [new branch] gh/ydwu4/295/orig -> origin/gh/ydwu4/295/orig 2025-12-04T10:14:41.2427960Z * [new branch] gh/ydwu4/296/base -> origin/gh/ydwu4/296/base 2025-12-04T10:14:41.2428025Z * [new branch] gh/ydwu4/296/head -> 
origin/gh/ydwu4/296/head 2025-12-04T10:14:41.2428092Z * [new branch] gh/ydwu4/296/orig -> origin/gh/ydwu4/296/orig 2025-12-04T10:14:41.2428157Z * [new branch] gh/ydwu4/306/base -> origin/gh/ydwu4/306/base 2025-12-04T10:14:41.2428225Z * [new branch] gh/ydwu4/306/head -> origin/gh/ydwu4/306/head 2025-12-04T10:14:41.2428290Z * [new branch] gh/ydwu4/306/orig -> origin/gh/ydwu4/306/orig 2025-12-04T10:14:41.2428355Z * [new branch] gh/ydwu4/312/base -> origin/gh/ydwu4/312/base 2025-12-04T10:14:41.2428422Z * [new branch] gh/ydwu4/312/head -> origin/gh/ydwu4/312/head 2025-12-04T10:14:41.2428487Z * [new branch] gh/ydwu4/312/orig -> origin/gh/ydwu4/312/orig 2025-12-04T10:14:41.2428551Z * [new branch] gh/ydwu4/322/base -> origin/gh/ydwu4/322/base 2025-12-04T10:14:41.2428618Z * [new branch] gh/ydwu4/322/head -> origin/gh/ydwu4/322/head 2025-12-04T10:14:41.2428717Z * [new branch] gh/ydwu4/322/orig -> origin/gh/ydwu4/322/orig 2025-12-04T10:14:41.2428781Z * [new branch] gh/ydwu4/327/base -> origin/gh/ydwu4/327/base 2025-12-04T10:14:41.2428847Z * [new branch] gh/ydwu4/327/head -> origin/gh/ydwu4/327/head 2025-12-04T10:14:41.2428911Z * [new branch] gh/ydwu4/327/orig -> origin/gh/ydwu4/327/orig 2025-12-04T10:14:41.2428978Z * [new branch] gh/ydwu4/328/base -> origin/gh/ydwu4/328/base 2025-12-04T10:14:41.2429043Z * [new branch] gh/ydwu4/328/head -> origin/gh/ydwu4/328/head 2025-12-04T10:14:41.2429108Z * [new branch] gh/ydwu4/328/orig -> origin/gh/ydwu4/328/orig 2025-12-04T10:14:41.2429172Z * [new branch] gh/ydwu4/329/base -> origin/gh/ydwu4/329/base 2025-12-04T10:14:41.2429238Z * [new branch] gh/ydwu4/329/head -> origin/gh/ydwu4/329/head 2025-12-04T10:14:41.2429303Z * [new branch] gh/ydwu4/329/orig -> origin/gh/ydwu4/329/orig 2025-12-04T10:14:41.2429369Z * [new branch] gh/ydwu4/330/base -> origin/gh/ydwu4/330/base 2025-12-04T10:14:41.2429436Z * [new branch] gh/ydwu4/330/head -> origin/gh/ydwu4/330/head 2025-12-04T10:14:41.2429501Z * [new branch] gh/ydwu4/330/orig -> origin/gh/ydwu4/330/orig 2025-12-04T10:14:41.2429589Z * [new branch] gh/ydwu4/331/base -> origin/gh/ydwu4/331/base 2025-12-04T10:14:41.2429656Z * [new branch] gh/ydwu4/331/head -> origin/gh/ydwu4/331/head 2025-12-04T10:14:41.2429721Z * [new branch] gh/ydwu4/331/orig -> origin/gh/ydwu4/331/orig 2025-12-04T10:14:41.2429861Z * [new branch] gh/ydwu4/332/base -> origin/gh/ydwu4/332/base 2025-12-04T10:14:41.2429928Z * [new branch] gh/ydwu4/332/head -> origin/gh/ydwu4/332/head 2025-12-04T10:14:41.2429993Z * [new branch] gh/ydwu4/332/orig -> origin/gh/ydwu4/332/orig 2025-12-04T10:14:41.2430061Z * [new branch] gh/ydwu4/333/base -> origin/gh/ydwu4/333/base 2025-12-04T10:14:41.2430126Z * [new branch] gh/ydwu4/333/head -> origin/gh/ydwu4/333/head 2025-12-04T10:14:41.2430192Z * [new branch] gh/ydwu4/333/orig -> origin/gh/ydwu4/333/orig 2025-12-04T10:14:41.2430259Z * [new branch] gh/ydwu4/334/base -> origin/gh/ydwu4/334/base 2025-12-04T10:14:41.2430325Z * [new branch] gh/ydwu4/334/head -> origin/gh/ydwu4/334/head 2025-12-04T10:14:41.2430389Z * [new branch] gh/ydwu4/334/orig -> origin/gh/ydwu4/334/orig 2025-12-04T10:14:41.2430456Z * [new branch] gh/ydwu4/335/base -> origin/gh/ydwu4/335/base 2025-12-04T10:14:41.2430521Z * [new branch] gh/ydwu4/335/head -> origin/gh/ydwu4/335/head 2025-12-04T10:14:41.2430585Z * [new branch] gh/ydwu4/335/orig -> origin/gh/ydwu4/335/orig 2025-12-04T10:14:41.2430775Z * [new branch] gh/ydwu4/337/base -> origin/gh/ydwu4/337/base 2025-12-04T10:14:41.2430842Z * [new branch] gh/ydwu4/337/head -> origin/gh/ydwu4/337/head 
2025-12-04T10:14:41.2430907Z * [new branch] gh/ydwu4/337/orig -> origin/gh/ydwu4/337/orig 2025-12-04T10:14:41.2430980Z * [new branch] gh/ydwu4/339/base -> origin/gh/ydwu4/339/base 2025-12-04T10:14:41.2431046Z * [new branch] gh/ydwu4/339/head -> origin/gh/ydwu4/339/head 2025-12-04T10:14:41.2431111Z * [new branch] gh/ydwu4/339/orig -> origin/gh/ydwu4/339/orig 2025-12-04T10:14:41.2431177Z * [new branch] gh/yf225/133/base -> origin/gh/yf225/133/base 2025-12-04T10:14:41.2431241Z * [new branch] gh/yf225/133/head -> origin/gh/yf225/133/head 2025-12-04T10:14:41.2431306Z * [new branch] gh/yf225/93/base -> origin/gh/yf225/93/base 2025-12-04T10:14:41.2431419Z * [new branch] gh/yf225/93/head -> origin/gh/yf225/93/head 2025-12-04T10:14:41.2431491Z * [new branch] gh/yifuwang/152/base -> origin/gh/yifuwang/152/base 2025-12-04T10:14:41.2431562Z * [new branch] gh/yifuwang/152/head -> origin/gh/yifuwang/152/head 2025-12-04T10:14:41.2431634Z * [new branch] gh/yifuwang/152/orig -> origin/gh/yifuwang/152/orig 2025-12-04T10:14:41.2431704Z * [new branch] gh/yifuwang/195/base -> origin/gh/yifuwang/195/base 2025-12-04T10:14:41.2431775Z * [new branch] gh/yifuwang/195/head -> origin/gh/yifuwang/195/head 2025-12-04T10:14:41.2431844Z * [new branch] gh/yifuwang/195/orig -> origin/gh/yifuwang/195/orig 2025-12-04T10:14:41.2431915Z * [new branch] gh/yiming0416/1/base -> origin/gh/yiming0416/1/base 2025-12-04T10:14:41.2431989Z * [new branch] gh/yiming0416/1/head -> origin/gh/yiming0416/1/head 2025-12-04T10:14:41.2432059Z * [new branch] gh/yiming0416/2/base -> origin/gh/yiming0416/2/base 2025-12-04T10:14:41.2432129Z * [new branch] gh/yiming0416/2/head -> origin/gh/yiming0416/2/head 2025-12-04T10:14:41.2432202Z * [new branch] gh/yushangdi/1/base -> origin/gh/yushangdi/1/base 2025-12-04T10:14:41.2432317Z * [new branch] gh/yushangdi/1/head -> origin/gh/yushangdi/1/head 2025-12-04T10:14:41.2432388Z * [new branch] gh/yushangdi/10/base -> origin/gh/yushangdi/10/base 2025-12-04T10:14:41.2432461Z * [new branch] gh/yushangdi/10/head -> origin/gh/yushangdi/10/head 2025-12-04T10:14:41.2432531Z * [new branch] gh/yushangdi/10/orig -> origin/gh/yushangdi/10/orig 2025-12-04T10:14:41.2432600Z * [new branch] gh/yushangdi/11/base -> origin/gh/yushangdi/11/base 2025-12-04T10:14:41.2432672Z * [new branch] gh/yushangdi/11/head -> origin/gh/yushangdi/11/head 2025-12-04T10:14:41.2432743Z * [new branch] gh/yushangdi/11/orig -> origin/gh/yushangdi/11/orig 2025-12-04T10:14:41.2432813Z * [new branch] gh/yushangdi/2/base -> origin/gh/yushangdi/2/base 2025-12-04T10:14:41.2432883Z * [new branch] gh/yushangdi/2/head -> origin/gh/yushangdi/2/head 2025-12-04T10:14:41.2432955Z * [new branch] gh/yushangdi/7/base -> origin/gh/yushangdi/7/base 2025-12-04T10:14:41.2433024Z * [new branch] gh/yushangdi/7/head -> origin/gh/yushangdi/7/head 2025-12-04T10:14:41.2433095Z * [new branch] gh/yushangdi/7/orig -> origin/gh/yushangdi/7/orig 2025-12-04T10:14:41.2433163Z * [new branch] gh/yushangdi/8/base -> origin/gh/yushangdi/8/base 2025-12-04T10:14:41.2433232Z * [new branch] gh/yushangdi/8/head -> origin/gh/yushangdi/8/head 2025-12-04T10:14:41.2433301Z * [new branch] gh/yushangdi/8/orig -> origin/gh/yushangdi/8/orig 2025-12-04T10:14:41.2433371Z * [new branch] gh/yushangdi/9/base -> origin/gh/yushangdi/9/base 2025-12-04T10:14:41.2433441Z * [new branch] gh/yushangdi/9/head -> origin/gh/yushangdi/9/head 2025-12-04T10:14:41.2433509Z * [new branch] gh/yushangdi/9/orig -> origin/gh/yushangdi/9/orig 2025-12-04T10:14:41.2433578Z * [new branch] gh/zklaus/19/base -> 
origin/gh/zklaus/19/base 2025-12-04T10:14:41.2433646Z * [new branch] gh/zklaus/19/head -> origin/gh/zklaus/19/head 2025-12-04T10:14:41.2433713Z * [new branch] gh/zklaus/19/orig -> origin/gh/zklaus/19/orig 2025-12-04T10:14:41.2433779Z * [new branch] gh/zklaus/20/base -> origin/gh/zklaus/20/base 2025-12-04T10:14:41.2433846Z * [new branch] gh/zklaus/20/head -> origin/gh/zklaus/20/head 2025-12-04T10:14:41.2433911Z * [new branch] gh/zklaus/20/orig -> origin/gh/zklaus/20/orig 2025-12-04T10:14:41.2434003Z * [new branch] gh/zklaus/21/base -> origin/gh/zklaus/21/base 2025-12-04T10:14:41.2434069Z * [new branch] gh/zklaus/21/head -> origin/gh/zklaus/21/head 2025-12-04T10:14:41.2434134Z * [new branch] gh/zklaus/21/orig -> origin/gh/zklaus/21/orig 2025-12-04T10:14:41.2434201Z * [new branch] gh/zklaus/22/base -> origin/gh/zklaus/22/base 2025-12-04T10:14:41.2434267Z * [new branch] gh/zklaus/22/head -> origin/gh/zklaus/22/head 2025-12-04T10:14:41.2434332Z * [new branch] gh/zklaus/22/orig -> origin/gh/zklaus/22/orig 2025-12-04T10:14:41.2434397Z * [new branch] gh/zklaus/23/base -> origin/gh/zklaus/23/base 2025-12-04T10:14:41.2434464Z * [new branch] gh/zklaus/23/head -> origin/gh/zklaus/23/head 2025-12-04T10:14:41.2434529Z * [new branch] gh/zklaus/23/orig -> origin/gh/zklaus/23/orig 2025-12-04T10:14:41.2434597Z * [new branch] gh/zklaus/24/base -> origin/gh/zklaus/24/base 2025-12-04T10:14:41.2434663Z * [new branch] gh/zklaus/24/head -> origin/gh/zklaus/24/head 2025-12-04T10:14:41.2434728Z * [new branch] gh/zklaus/24/orig -> origin/gh/zklaus/24/orig 2025-12-04T10:14:41.2434825Z * [new branch] gh/zou3519/1197/base -> origin/gh/zou3519/1197/base 2025-12-04T10:14:41.2434898Z * [new branch] gh/zou3519/1197/head -> origin/gh/zou3519/1197/head 2025-12-04T10:14:41.2434968Z * [new branch] gh/zou3519/1197/orig -> origin/gh/zou3519/1197/orig 2025-12-04T10:14:41.2435038Z * [new branch] gh/zou3519/1199/base -> origin/gh/zou3519/1199/base 2025-12-04T10:14:41.2435106Z * [new branch] gh/zou3519/1199/head -> origin/gh/zou3519/1199/head 2025-12-04T10:14:41.2435173Z * [new branch] gh/zou3519/1199/orig -> origin/gh/zou3519/1199/orig 2025-12-04T10:14:41.2435243Z * [new branch] gh/zou3519/1200/base -> origin/gh/zou3519/1200/base 2025-12-04T10:14:41.2435311Z * [new branch] gh/zou3519/1200/head -> origin/gh/zou3519/1200/head 2025-12-04T10:14:41.2435378Z * [new branch] gh/zou3519/1200/orig -> origin/gh/zou3519/1200/orig 2025-12-04T10:14:41.2435448Z * [new branch] gh/zou3519/1201/base -> origin/gh/zou3519/1201/base 2025-12-04T10:14:41.2435515Z * [new branch] gh/zou3519/1201/head -> origin/gh/zou3519/1201/head 2025-12-04T10:14:41.2435583Z * [new branch] gh/zou3519/1201/orig -> origin/gh/zou3519/1201/orig 2025-12-04T10:14:41.2435651Z * [new branch] gh/zou3519/1202/base -> origin/gh/zou3519/1202/base 2025-12-04T10:14:41.2435718Z * [new branch] gh/zou3519/1202/head -> origin/gh/zou3519/1202/head 2025-12-04T10:14:41.2435785Z * [new branch] gh/zou3519/1202/orig -> origin/gh/zou3519/1202/orig 2025-12-04T10:14:41.2435855Z * [new branch] gh/zpcore/1/base -> origin/gh/zpcore/1/base 2025-12-04T10:14:41.2435922Z * [new branch] gh/zpcore/1/head -> origin/gh/zpcore/1/head 2025-12-04T10:14:41.2435989Z * [new branch] gh/zpcore/11/base -> origin/gh/zpcore/11/base 2025-12-04T10:14:41.2436058Z * [new branch] gh/zpcore/11/head -> origin/gh/zpcore/11/head 2025-12-04T10:14:41.2436124Z * [new branch] gh/zpcore/11/orig -> origin/gh/zpcore/11/orig 2025-12-04T10:14:41.2436190Z * [new branch] gh/zpcore/12/base -> origin/gh/zpcore/12/base 
2025-12-04T10:14:41.2436257Z * [new branch] gh/zpcore/12/head -> origin/gh/zpcore/12/head 2025-12-04T10:14:41.2436323Z * [new branch] gh/zpcore/12/orig -> origin/gh/zpcore/12/orig 2025-12-04T10:14:41.2436390Z * [new branch] gh/zpcore/13/base -> origin/gh/zpcore/13/base 2025-12-04T10:14:41.2436482Z * [new branch] gh/zpcore/13/head -> origin/gh/zpcore/13/head 2025-12-04T10:14:41.2436547Z * [new branch] gh/zpcore/13/orig -> origin/gh/zpcore/13/orig 2025-12-04T10:14:41.2436615Z * [new branch] gh/zpcore/14/base -> origin/gh/zpcore/14/base 2025-12-04T10:14:41.2436680Z * [new branch] gh/zpcore/14/head -> origin/gh/zpcore/14/head 2025-12-04T10:14:41.2436747Z * [new branch] gh/zpcore/14/orig -> origin/gh/zpcore/14/orig 2025-12-04T10:14:41.2436813Z * [new branch] gh/zpcore/15/base -> origin/gh/zpcore/15/base 2025-12-04T10:14:41.2436878Z * [new branch] gh/zpcore/15/head -> origin/gh/zpcore/15/head 2025-12-04T10:14:41.2436944Z * [new branch] gh/zpcore/15/orig -> origin/gh/zpcore/15/orig 2025-12-04T10:14:41.2437010Z * [new branch] gh/zpcore/2/base -> origin/gh/zpcore/2/base 2025-12-04T10:14:41.2437078Z * [new branch] gh/zpcore/2/head -> origin/gh/zpcore/2/head 2025-12-04T10:14:41.2437144Z * [new branch] gh/zpcore/21/base -> origin/gh/zpcore/21/base 2025-12-04T10:14:41.2437211Z * [new branch] gh/zpcore/21/head -> origin/gh/zpcore/21/head 2025-12-04T10:14:41.2437279Z * [new branch] gh/zpcore/21/orig -> origin/gh/zpcore/21/orig 2025-12-04T10:14:41.2437378Z * [new branch] gh/zpcore/22/base -> origin/gh/zpcore/22/base 2025-12-04T10:14:41.2437446Z * [new branch] gh/zpcore/22/head -> origin/gh/zpcore/22/head 2025-12-04T10:14:41.2437512Z * [new branch] gh/zpcore/22/orig -> origin/gh/zpcore/22/orig 2025-12-04T10:14:41.2437578Z * [new branch] gh/zpcore/23/base -> origin/gh/zpcore/23/base 2025-12-04T10:14:41.2437644Z * [new branch] gh/zpcore/23/head -> origin/gh/zpcore/23/head 2025-12-04T10:14:41.2437711Z * [new branch] gh/zpcore/23/orig -> origin/gh/zpcore/23/orig 2025-12-04T10:14:41.2437777Z * [new branch] gh/zpcore/24/base -> origin/gh/zpcore/24/base 2025-12-04T10:14:41.2437844Z * [new branch] gh/zpcore/24/head -> origin/gh/zpcore/24/head 2025-12-04T10:14:41.2437911Z * [new branch] gh/zpcore/24/orig -> origin/gh/zpcore/24/orig 2025-12-04T10:14:41.2437980Z * [new branch] gh/zpcore/25/base -> origin/gh/zpcore/25/base 2025-12-04T10:14:41.2438046Z * [new branch] gh/zpcore/25/head -> origin/gh/zpcore/25/head 2025-12-04T10:14:41.2438111Z * [new branch] gh/zpcore/25/orig -> origin/gh/zpcore/25/orig 2025-12-04T10:14:41.2438179Z * [new branch] gh/zpcore/26/base -> origin/gh/zpcore/26/base 2025-12-04T10:14:41.2438245Z * [new branch] gh/zpcore/26/head -> origin/gh/zpcore/26/head 2025-12-04T10:14:41.2438313Z * [new branch] gh/zpcore/26/orig -> origin/gh/zpcore/26/orig 2025-12-04T10:14:41.2438380Z * [new branch] gh/zpcore/27/base -> origin/gh/zpcore/27/base 2025-12-04T10:14:41.2438446Z * [new branch] gh/zpcore/27/head -> origin/gh/zpcore/27/head 2025-12-04T10:14:41.2438512Z * [new branch] gh/zpcore/27/orig -> origin/gh/zpcore/27/orig 2025-12-04T10:14:41.2438581Z * [new branch] gh/zpcore/28/base -> origin/gh/zpcore/28/base 2025-12-04T10:14:41.2438648Z * [new branch] gh/zpcore/28/head -> origin/gh/zpcore/28/head 2025-12-04T10:14:41.2438713Z * [new branch] gh/zpcore/28/orig -> origin/gh/zpcore/28/orig 2025-12-04T10:14:41.2438781Z * [new branch] gh/zpcore/3/base -> origin/gh/zpcore/3/base 2025-12-04T10:14:41.2438846Z * [new branch] gh/zpcore/3/head -> origin/gh/zpcore/3/head 2025-12-04T10:14:41.2438912Z * [new branch] 
gh/zpcore/4/base -> origin/gh/zpcore/4/base 2025-12-04T10:14:41.2439008Z * [new branch] gh/zpcore/4/head -> origin/gh/zpcore/4/head 2025-12-04T10:14:41.2439074Z * [new branch] gh/zpcore/5/base -> origin/gh/zpcore/5/base 2025-12-04T10:14:41.2439138Z * [new branch] gh/zpcore/5/head -> origin/gh/zpcore/5/head 2025-12-04T10:14:41.2439206Z * [new branch] gh/zpcore/6/base -> origin/gh/zpcore/6/base 2025-12-04T10:14:41.2439273Z * [new branch] gh/zpcore/6/head -> origin/gh/zpcore/6/head 2025-12-04T10:14:41.2439339Z * [new branch] gh/zpcore/7/base -> origin/gh/zpcore/7/base 2025-12-04T10:14:41.2439405Z * [new branch] gh/zpcore/7/head -> origin/gh/zpcore/7/head 2025-12-04T10:14:41.2439470Z * [new branch] gh/zpcore/8/base -> origin/gh/zpcore/8/base 2025-12-04T10:14:41.2439535Z * [new branch] gh/zpcore/8/head -> origin/gh/zpcore/8/head 2025-12-04T10:14:41.2439606Z * [new branch] google-main -> origin/google-main 2025-12-04T10:14:41.2439691Z * [new branch] guangyey/external_stream -> origin/guangyey/external_stream 2025-12-04T10:14:41.2439763Z * [new branch] guangyey/test_2025 -> origin/guangyey/test_2025 2025-12-04T10:14:41.2439929Z * [new branch] guilhermeleobas/cherry-pick-55d87d9dfd9 -> origin/guilhermeleobas/cherry-pick-55d87d9dfd9 2025-12-04T10:14:41.2440046Z * [new branch] hameerabbasi/complex_tensor_subclass -> origin/hameerabbasi/complex_tensor_subclass 2025-12-04T10:14:41.2440183Z * [new branch] hameerabbasi/fix-ctensor-gradcheck-tests -> origin/hameerabbasi/fix-ctensor-gradcheck-tests 2025-12-04T10:14:41.2440289Z * [new branch] hameerabbasi/gradcheck-allclose -> origin/hameerabbasi/gradcheck-allclose 2025-12-04T10:14:41.2440354Z * [new branch] hc_baseline -> origin/hc_baseline 2025-12-04T10:14:41.2440417Z * [new branch] hhh_rand -> origin/hhh_rand 2025-12-04T10:14:41.2440478Z * [new branch] huba/f1 -> origin/huba/f1 2025-12-04T10:14:41.2440701Z * [new branch] increase-timeout-linux-jammy-cuda12_8-py3_10-gcc11-test -> origin/increase-timeout-linux-jammy-cuda12_8-py3_10-gcc11-test 2025-12-04T10:14:41.2440767Z * [new branch] inlining -> origin/inlining 2025-12-04T10:14:41.2440837Z * [new branch] inlining-ezyang -> origin/inlining-ezyang 2025-12-04T10:14:41.2440921Z * [new branch] install-torchao-0.13.0 -> origin/install-torchao-0.13.0 2025-12-04T10:14:41.2441097Z * [new branch] instrument-trunk-pull-linux-with-job-test-filters -> origin/instrument-trunk-pull-linux-with-job-test-filters 2025-12-04T10:14:41.2441166Z * [new branch] invoke-subgraph -> origin/invoke-subgraph 2025-12-04T10:14:41.2441230Z * [new branch] issue#58739 -> origin/issue#58739 2025-12-04T10:14:41.2441308Z * [new branch] jainapurva-patch-1 -> origin/jainapurva-patch-1 2025-12-04T10:14:41.2441367Z * [new branch] jathu/o3 -> origin/jathu/o3 2025-12-04T10:14:41.2441428Z * [new branch] jathu/sve -> origin/jathu/sve 2025-12-04T10:14:41.2441550Z * [new branch] jcaip/test-cusparselt-version-0.6.2 -> origin/jcaip/test-cusparselt-version-0.6.2 2025-12-04T10:14:41.2441653Z * [new branch] jcaip/update-cusparselt-0.6.2 -> origin/jcaip/update-cusparselt-0.6.2 2025-12-04T10:14:41.2441763Z * [new branch] jiannanWang/memorysnapshot_filter -> origin/jiannanWang/memorysnapshot_filter 2025-12-04T10:14:41.2441870Z * [new branch] jiannanWang/profilerstepwarning -> origin/jiannanWang/profilerstepwarning 2025-12-04T10:14:41.2441953Z * [new branch] jithunnair-amd-patch-1 -> origin/jithunnair-amd-patch-1 2025-12-04T10:14:41.2442081Z * [new branch] jithunnair-amd-patch-10 -> origin/jithunnair-amd-patch-10 2025-12-04T10:14:41.2442160Z * [new branch] 
jithunnair-amd-patch-2 -> origin/jithunnair-amd-patch-2 2025-12-04T10:14:41.2442239Z * [new branch] jithunnair-amd-patch-3 -> origin/jithunnair-amd-patch-3 2025-12-04T10:14:41.2442320Z * [new branch] jithunnair-amd-patch-4 -> origin/jithunnair-amd-patch-4 2025-12-04T10:14:41.2442398Z * [new branch] jithunnair-amd-patch-5 -> origin/jithunnair-amd-patch-5 2025-12-04T10:14:41.2442475Z * [new branch] jithunnair-amd-patch-6 -> origin/jithunnair-amd-patch-6 2025-12-04T10:14:41.2442553Z * [new branch] jithunnair-amd-patch-7 -> origin/jithunnair-amd-patch-7 2025-12-04T10:14:41.2442629Z * [new branch] jithunnair-amd-patch-8 -> origin/jithunnair-amd-patch-8 2025-12-04T10:14:41.2442708Z * [new branch] jithunnair-amd-patch-9 -> origin/jithunnair-amd-patch-9 2025-12-04T10:14:41.2442785Z * [new branch] justinchu/native-qdq -> origin/justinchu/native-qdq 2025-12-04T10:14:41.2442857Z * [new branch] kainan666/xlf_debug -> origin/kainan666/xlf_debug 2025-12-04T10:14:41.2442921Z * [new branch] kainan_test -> origin/kainan_test 2025-12-04T10:14:41.2443038Z * [new branch] larryliu0820-patch-1 -> origin/larryliu0820-patch-1 2025-12-04T10:14:41.2443143Z * [new branch] leslie/test_group_gemm_epilogues -> origin/leslie/test_group_gemm_epilogues 2025-12-04T10:14:41.2443246Z * [new branch] lessw2020/fix_cutlass_cache_error -> origin/lessw2020/fix_cutlass_cache_error 2025-12-04T10:14:41.2443324Z * [new branch] liaoxuan/shm_all_reduce -> origin/liaoxuan/shm_all_reduce 2025-12-04T10:14:41.2443424Z * [new branch] liaoxuan/test_fa_disable_softmax -> origin/liaoxuan/test_fa_disable_softmax 2025-12-04T10:14:41.2443504Z * [new branch] liaoxuan/test_int8_sdpa -> origin/liaoxuan/test_int8_sdpa 2025-12-04T10:14:41.2443572Z * [new branch] llama4-stable -> origin/llama4-stable 2025-12-04T10:14:41.2443639Z * [new branch] lts/release/1.8 -> origin/lts/release/1.8 2025-12-04T10:14:41.2443715Z * [new branch] lucaskabela/#94773 -> origin/lucaskabela/#94773 2025-12-04T10:14:41.2443791Z * [new branch] lucaskabela/fix_164876 -> origin/lucaskabela/fix_164876 2025-12-04T10:14:41.2443872Z * [new branch] lucaskabela/flop_counter -> origin/lucaskabela/flop_counter 2025-12-04T10:14:41.2443967Z * [new branch] lucaskabela/func_under_decomp -> origin/lucaskabela/func_under_decomp 2025-12-04T10:14:41.2444070Z * [new branch] lucaskabela/functional_in_dynamo -> origin/lucaskabela/functional_in_dynamo 2025-12-04T10:14:41.2444195Z * [new branch] lucaskabela/install_params_as_graph_attr -> origin/lucaskabela/install_params_as_graph_attr 2025-12-04T10:14:41.2444310Z * [new branch] lucaskabela/parameters_as_graph_attr -> origin/lucaskabela/parameters_as_graph_attr 2025-12-04T10:14:41.2444440Z * [new branch] lucaskabela/remove_aot_dispatcher_metadata -> origin/lucaskabela/remove_aot_dispatcher_metadata 2025-12-04T10:14:41.2444522Z * [new branch] lucaskabela/rnn_decomp -> origin/lucaskabela/rnn_decomp 2025-12-04T10:14:41.2444613Z * [new branch] lucaskabela/typing_backends -> origin/lucaskabela/typing_backends 2025-12-04T10:14:41.2444710Z * [new branch] lucaskabela/typing_ctx_manager -> origin/lucaskabela/typing_ctx_manager 2025-12-04T10:14:41.2444805Z * [new branch] lucaskabela/typing_nn_module -> origin/lucaskabela/typing_nn_module 2025-12-04T10:14:41.2444905Z * [new branch] lucaskabela/typing_user_defined -> origin/lucaskabela/typing_user_defined 2025-12-04T10:14:41.2445028Z * [new branch] lucaskabela/typing_variables -> origin/lucaskabela/typing_variables 2025-12-04T10:14:41.2445137Z * [new branch] lucaskabela/typing_variables_dicts -> 
origin/lucaskabela/typing_variables_dicts 2025-12-04T10:14:41.2445256Z * [new branch] lucaskabela/typing_variables_functions -> origin/lucaskabela/typing_variables_functions 2025-12-04T10:14:41.2445365Z * [new branch] lucaskabela/typing_variables_lists -> origin/lucaskabela/typing_variables_lists 2025-12-04T10:14:41.2445439Z * [new branch] lw/torch_box_by_ref -> origin/lw/torch_box_by_ref 2025-12-04T10:14:41.2445499Z * [new branch] main -> origin/main 2025-12-04T10:14:41.2445569Z * [new branch] malfet-patch-1 -> origin/malfet-patch-1 2025-12-04T10:14:41.2445639Z * [new branch] malfet-patch-2 -> origin/malfet-patch-2 2025-12-04T10:14:41.2445707Z * [new branch] malfet-patch-3 -> origin/malfet-patch-3 2025-12-04T10:14:41.2445775Z * [new branch] malfet-patch-4 -> origin/malfet-patch-4 2025-12-04T10:14:41.2445840Z * [new branch] malfet-patch-5 -> origin/malfet-patch-5 2025-12-04T10:14:41.2445904Z * [new branch] malfet-patch-6 -> origin/malfet-patch-6 2025-12-04T10:14:41.2446000Z * [new branch] malfet-patch-7 -> origin/malfet-patch-7 2025-12-04T10:14:41.2446067Z * [new branch] malfet-patch-8 -> origin/malfet-patch-8 2025-12-04T10:14:41.2446140Z * [new branch] malfet/add-3.14-ci -> origin/malfet/add-3.14-ci 2025-12-04T10:14:41.2446301Z * [new branch] malfet/be-do-not-make-typos-in-build-artifacts -> origin/malfet/be-do-not-make-typos-in-build-artifacts 2025-12-04T10:14:41.2446465Z * [new branch] malfet/be-move-more-settings-to-checkout-pytorch -> origin/malfet/be-move-more-settings-to-checkout-pytorch 2025-12-04T10:14:41.2446591Z * [new branch] malfet/be-remove-misisng-neon-headers -> origin/malfet/be-remove-misisng-neon-headers 2025-12-04T10:14:41.2446688Z * [new branch] malfet/mps-implement-col2im -> origin/malfet/mps-implement-col2im 2025-12-04T10:14:41.2446803Z * [new branch] manuel/aoti_metal_shimify-thread_safe -> origin/manuel/aoti_metal_shimify-thread_safe 2025-12-04T10:14:41.2446894Z * [new branch] manuel/inductor_link_openmp -> origin/manuel/inductor_link_openmp 2025-12-04T10:14:41.2446970Z * [new branch] masnesral/metaconda -> origin/masnesral/metaconda 2025-12-04T10:14:41.2447044Z * [new branch] mem_profiler_flaky_fix -> origin/mem_profiler_flaky_fix 2025-12-04T10:14:41.2447122Z * [new branch] mem_profiler_stack_trace -> origin/mem_profiler_stack_trace 2025-12-04T10:14:41.2447198Z * [new branch] memory_profiler_stack -> origin/memory_profiler_stack 2025-12-04T10:14:41.2447272Z * [new branch] metascroy-patch-1 -> origin/metascroy-patch-1 2025-12-04T10:14:41.2447337Z * [new branch] mingw_posix -> origin/mingw_posix 2025-12-04T10:14:41.2447411Z * [new branch] mlazos/S429861-debug -> origin/mlazos/S429861-debug 2025-12-04T10:14:41.2447474Z * [new branch] mlazos/aa -> origin/mlazos/aa 2025-12-04T10:14:41.2447538Z * [new branch] mlazos/acts -> origin/mlazos/acts 2025-12-04T10:14:41.2447610Z * [new branch] mlazos/arg-renames -> origin/mlazos/arg-renames 2025-12-04T10:14:41.2447688Z * [new branch] mlazos/bad-cudagraphs -> origin/mlazos/bad-cudagraphs 2025-12-04T10:14:41.2447790Z * [new branch] mlazos/baseline-graph-breaks -> origin/mlazos/baseline-graph-breaks 2025-12-04T10:14:41.2447862Z * [new branch] mlazos/beta-tensor -> origin/mlazos/beta-tensor 2025-12-04T10:14:41.2447958Z * [new branch] mlazos/buffers -> origin/mlazos/buffers 2025-12-04T10:14:41.2448026Z * [new branch] mlazos/buffers2 -> origin/mlazos/buffers2 2025-12-04T10:14:41.2448091Z * [new branch] mlazos/buffers3 -> origin/mlazos/buffers3 2025-12-04T10:14:41.2448155Z * [new branch] mlazos/bwd -> origin/mlazos/bwd 
2025-12-04T10:14:41.2448227Z * [new branch] mlazos/combo-test -> origin/mlazos/combo-test 2025-12-04T10:14:41.2448298Z * [new branch] mlazos/ctx-cleanup -> origin/mlazos/ctx-cleanup 2025-12-04T10:14:41.2448371Z * [new branch] mlazos/cuda-cmd-log -> origin/mlazos/cuda-cmd-log 2025-12-04T10:14:41.2448452Z * [new branch] mlazos/cudagraph-tests -> origin/mlazos/cudagraph-tests 2025-12-04T10:14:41.2448552Z * [new branch] mlazos/cudagraphs-measurement -> origin/mlazos/cudagraphs-measurement 2025-12-04T10:14:41.2448627Z * [new branch] mlazos/cutlass-test -> origin/mlazos/cutlass-test 2025-12-04T10:14:41.2448708Z * [new branch] mlazos/cutlass-topo-bug -> origin/mlazos/cutlass-topo-bug 2025-12-04T10:14:41.2448785Z * [new branch] mlazos/dataclass-proxy -> origin/mlazos/dataclass-proxy 2025-12-04T10:14:41.2448883Z * [new branch] mlazos/dc-attrs -> origin/mlazos/dc-attrs 2025-12-04T10:14:41.2448954Z * [new branch] mlazos/dc-helion -> origin/mlazos/dc-helion 2025-12-04T10:14:41.2449021Z * [new branch] mlazos/dict-fix -> origin/mlazos/dict-fix 2025-12-04T10:14:41.2449093Z * [new branch] mlazos/disable-tf -> origin/mlazos/disable-tf 2025-12-04T10:14:41.2449162Z * [new branch] mlazos/dupe-fix -> origin/mlazos/dupe-fix 2025-12-04T10:14:41.2449229Z * [new branch] mlazos/dyn-batch -> origin/mlazos/dyn-batch 2025-12-04T10:14:41.2449293Z * [new branch] mlazos/evt -> origin/mlazos/evt 2025-12-04T10:14:41.2449373Z * [new branch] mlazos/extract-examples -> origin/mlazos/extract-examples 2025-12-04T10:14:41.2449443Z * [new branch] mlazos/foreach-op -> origin/mlazos/foreach-op 2025-12-04T10:14:41.2449507Z * [new branch] mlazos/fp8 -> origin/mlazos/fp8 2025-12-04T10:14:41.2449574Z * [new branch] mlazos/fp8-bias -> origin/mlazos/fp8-bias 2025-12-04T10:14:41.2449651Z * [new branch] mlazos/fp8-bias-fusion -> origin/mlazos/fp8-bias-fusion 2025-12-04T10:14:41.2449720Z * [new branch] mlazos/fp8-fixes -> origin/mlazos/fp8-fixes 2025-12-04T10:14:41.2449786Z * [new branch] mlazos/freezing -> origin/mlazos/freezing 2025-12-04T10:14:41.2449852Z * [new branch] mlazos/h-comp -> origin/mlazos/h-comp 2025-12-04T10:14:41.2449921Z * [new branch] mlazos/h-comp2 -> origin/mlazos/h-comp2 2025-12-04T10:14:41.2449986Z * [new branch] mlazos/hash-hop -> origin/mlazos/hash-hop 2025-12-04T10:14:41.2450046Z * [new branch] mlazos/hc -> origin/mlazos/hc 2025-12-04T10:14:41.2450116Z * [new branch] mlazos/hc-cycles -> origin/mlazos/hc-cycles 2025-12-04T10:14:41.2450183Z * [new branch] mlazos/hc-fixes -> origin/mlazos/hc-fixes 2025-12-04T10:14:41.2450250Z * [new branch] mlazos/hc-fixes3 -> origin/mlazos/hc-fixes3 2025-12-04T10:14:41.2450317Z * [new branch] mlazos/hc-fixes4 -> origin/mlazos/hc-fixes4 2025-12-04T10:14:41.2450381Z * [new branch] mlazos/hc-hf -> origin/mlazos/hc-hf 2025-12-04T10:14:41.2450445Z * [new branch] mlazos/hc-mut -> origin/mlazos/hc-mut 2025-12-04T10:14:41.2450533Z * [new branch] mlazos/hc10 -> origin/mlazos/hc10 2025-12-04T10:14:41.2450633Z * [new branch] mlazos/hc11 -> origin/mlazos/hc11 2025-12-04T10:14:41.2450697Z * [new branch] mlazos/hc12 -> origin/mlazos/hc12 2025-12-04T10:14:41.2450758Z * [new branch] mlazos/hc13 -> origin/mlazos/hc13 2025-12-04T10:14:41.2450820Z * [new branch] mlazos/hc14 -> origin/mlazos/hc14 2025-12-04T10:14:41.2450879Z * [new branch] mlazos/hc15 -> origin/mlazos/hc15 2025-12-04T10:14:41.2450940Z * [new branch] mlazos/hc2 -> origin/mlazos/hc2 2025-12-04T10:14:41.2451001Z * [new branch] mlazos/hc4 -> origin/mlazos/hc4 2025-12-04T10:14:41.2451061Z * [new branch] mlazos/hc5 -> origin/mlazos/hc5 
2025-12-04T10:14:41.2451123Z * [new branch] mlazos/hc6 -> origin/mlazos/hc6 2025-12-04T10:14:41.2451183Z * [new branch] mlazos/hc7 -> origin/mlazos/hc7 2025-12-04T10:14:41.2451242Z * [new branch] mlazos/hc8 -> origin/mlazos/hc8 2025-12-04T10:14:41.2451303Z * [new branch] mlazos/hc9 -> origin/mlazos/hc9 2025-12-04T10:14:41.2451373Z * [new branch] mlazos/hc_baseline2 -> origin/mlazos/hc_baseline2 2025-12-04T10:14:41.2451497Z * [new branch] mlazos/inductor-streams -> origin/mlazos/inductor-streams 2025-12-04T10:14:41.2451560Z * [new branch] mlazos/main -> origin/mlazos/main 2025-12-04T10:14:41.2451621Z * [new branch] mlazos/mcg2 -> origin/mlazos/mcg2 2025-12-04T10:14:41.2451694Z * [new branch] mlazos/meta-guards -> origin/mlazos/meta-guards 2025-12-04T10:14:41.2451795Z * [new branch] mlazos/mlazos/foreach-map-adam -> origin/mlazos/mlazos/foreach-map-adam 2025-12-04T10:14:41.2451892Z * [new branch] mlazos/mlazos/tf-mode-backup -> origin/mlazos/mlazos/tf-mode-backup 2025-12-04T10:14:41.2451960Z * [new branch] mlazos/mod-fix -> origin/mlazos/mod-fix 2025-12-04T10:14:41.2452025Z * [new branch] mlazos/mode-fix -> origin/mlazos/mode-fix 2025-12-04T10:14:41.2452089Z * [new branch] mlazos/offsets -> origin/mlazos/offsets 2025-12-04T10:14:41.2452164Z * [new branch] mlazos/overguarding -> origin/mlazos/overguarding 2025-12-04T10:14:41.2452236Z * [new branch] mlazos/proxy-ctors -> origin/mlazos/proxy-ctors 2025-12-04T10:14:41.2452303Z * [new branch] mlazos/quant-fix -> origin/mlazos/quant-fix 2025-12-04T10:14:41.2452374Z * [new branch] mlazos/resnet-fix -> origin/mlazos/resnet-fix 2025-12-04T10:14:41.2452446Z * [new branch] mlazos/rm-buf-names -> origin/mlazos/rm-buf-names 2025-12-04T10:14:41.2452512Z * [new branch] mlazos/rm-code -> origin/mlazos/rm-code 2025-12-04T10:14:41.2452580Z * [new branch] mlazos/rm-spam -> origin/mlazos/rm-spam 2025-12-04T10:14:41.2452641Z * [new branch] mlazos/rtp -> origin/mlazos/rtp 2025-12-04T10:14:41.2452718Z * [new branch] mlazos/static-idx-dbg -> origin/mlazos/static-idx-dbg 2025-12-04T10:14:41.2452806Z * [new branch] mlazos/static-inputs-log -> origin/mlazos/static-inputs-log 2025-12-04T10:14:41.2452871Z * [new branch] mlazos/stests -> origin/mlazos/stests 2025-12-04T10:14:41.2452941Z * [new branch] mlazos/stream-ops -> origin/mlazos/stream-ops 2025-12-04T10:14:41.2453007Z * [new branch] mlazos/td-fix2 -> origin/mlazos/td-fix2 2025-12-04T10:14:41.2453083Z * [new branch] mlazos/tensor-hasattr2 -> origin/mlazos/tensor-hasattr2 2025-12-04T10:14:41.2453146Z * [new branch] mlazos/test -> origin/mlazos/test 2025-12-04T10:14:41.2453249Z * [new branch] mlazos/tf-mode -> origin/mlazos/tf-mode 2025-12-04T10:14:41.2453327Z * [new branch] mlazos/tf-mode-backup2 -> origin/mlazos/tf-mode-backup2 2025-12-04T10:14:41.2453402Z * [new branch] mlazos/tf-mode-reland -> origin/mlazos/tf-mode-reland 2025-12-04T10:14:41.2453479Z * [new branch] mlazos/tf-mode-reland2 -> origin/mlazos/tf-mode-reland2 2025-12-04T10:14:41.2453554Z * [new branch] mlazos/tf-mode-reland3 -> origin/mlazos/tf-mode-reland3 2025-12-04T10:14:41.2453631Z * [new branch] mlazos/triton-no-epi -> origin/mlazos/triton-no-epi 2025-12-04T10:14:41.2453701Z * [new branch] mlazos/tune-proto -> origin/mlazos/tune-proto 2025-12-04T10:14:41.2453774Z * [new branch] mlazos/tuple-fixes -> origin/mlazos/tuple-fixes 2025-12-04T10:14:41.2453848Z * [new branch] mlazos/tuple-fixes2 -> origin/mlazos/tuple-fixes2 2025-12-04T10:14:41.2453925Z * [new branch] mlazos/tuple-handling -> origin/mlazos/tuple-handling 2025-12-04T10:14:41.2454003Z * 
[new branch] mlazos/user-stream-base -> origin/mlazos/user-stream-base 2025-12-04T10:14:41.2454078Z * [new branch] mlazos/user-streams -> origin/mlazos/user-streams 2025-12-04T10:14:41.2454194Z * [new branch] mlazos/user-streams-backup -> origin/mlazos/user-streams-backup 2025-12-04T10:14:41.2454288Z * [new branch] mlazos/user-streams-backup2 -> origin/mlazos/user-streams-backup2 2025-12-04T10:14:41.2454361Z * [new branch] mlazos/vary-beta -> origin/mlazos/vary-beta 2025-12-04T10:14:41.2454430Z * [new branch] mlazos/vary-beta2 -> origin/mlazos/vary-beta2 2025-12-04T10:14:41.2454501Z * [new branch] mlazos/weird-perf1 -> origin/mlazos/weird-perf1 2025-12-04T10:14:41.2454579Z * [new branch] mm_out_dtype_compile -> origin/mm_out_dtype_compile 2025-12-04T10:14:41.2454642Z * [new branch] module-shim -> origin/module-shim 2025-12-04T10:14:41.2454703Z * [new branch] move_config -> origin/move_config 2025-12-04T10:14:41.2454775Z * [new branch] msaroufim/reduce -> origin/msaroufim/reduce 2025-12-04T10:14:41.2454844Z * [new branch] mtia/basic-cmake -> origin/mtia/basic-cmake 2025-12-04T10:14:41.2454946Z * [new branch] mwizak/fix-triton-block-shape -> origin/mwizak/fix-triton-block-shape 2025-12-04T10:14:41.2455012Z * [new branch] my_varlen_backup -> origin/my_varlen_backup 2025-12-04T10:14:41.2455085Z * [new branch] nativert_num_outputs -> origin/nativert_num_outputs 2025-12-04T10:14:41.2455149Z * [new branch] new-codegen -> origin/new-codegen 2025-12-04T10:14:41.2455216Z * [new branch] newtest-base -> origin/newtest-base 2025-12-04T10:14:41.2455288Z * [new branch] ngimel/addmm_dtype -> origin/ngimel/addmm_dtype 2025-12-04T10:14:41.2455354Z * [new branch] ngimel/div_inv -> origin/ngimel/div_inv 2025-12-04T10:14:41.2455430Z * [new branch] ngimel/error_index_list -> origin/ngimel/error_index_list 2025-12-04T10:14:41.2455502Z * [new branch] ngimel/gather_grid -> origin/ngimel/gather_grid 2025-12-04T10:14:41.2455589Z * [new branch] ngimel/gather_grid_release -> origin/ngimel/gather_grid_release 2025-12-04T10:14:41.2455653Z * [new branch] ngimel/gg_new -> origin/ngimel/gg_new 2025-12-04T10:14:41.2455719Z * [new branch] ngimel/hostalloc -> origin/ngimel/hostalloc 2025-12-04T10:14:41.2455788Z * [new branch] ngimel/storage_id -> origin/ngimel/storage_id 2025-12-04T10:14:41.2455848Z * [new branch] nightly -> origin/nightly 2025-12-04T10:14:41.2455991Z * [new branch] nikitaved/addmm_1_rowcol_lt_path_check -> origin/nikitaved/addmm_1_rowcol_lt_path_check 2025-12-04T10:14:41.2456113Z * [new branch] nikitaved/addmm_epilogue_fusions_2d_bias -> origin/nikitaved/addmm_epilogue_fusions_2d_bias 2025-12-04T10:14:41.2456238Z * [new branch] nikitaved/addmm_epilogue_fusions_inductor -> origin/nikitaved/addmm_epilogue_fusions_inductor 2025-12-04T10:14:41.2456359Z * [new branch] nikitaved/addmm_epilogue_fusions_scratch -> origin/nikitaved/addmm_epilogue_fusions_scratch 2025-12-04T10:14:41.2456475Z * [new branch] nikitaved/grad_addmm_epilogue_fusions -> origin/nikitaved/grad_addmm_epilogue_fusions 2025-12-04T10:14:41.2456586Z * [new branch] nikitaved/simpler_can_use_32bit_index -> origin/nikitaved/simpler_can_use_32bit_index 2025-12-04T10:14:41.2456652Z * [new branch] nikitaved/test -> origin/nikitaved/test 2025-12-04T10:14:41.2456776Z * [new branch] nmacchioni-perf-test-async-autotune -> origin/nmacchioni-perf-test-async-autotune 2025-12-04T10:14:41.2456854Z * [new branch] no_distributed_log_spew -> origin/no_distributed_log_spew 2025-12-04T10:14:41.2456920Z * [new branch] nofun-hack -> origin/nofun-hack 
2025-12-04T10:14:41.2457014Z * [new branch] norm_bench -> origin/norm_bench 2025-12-04T10:14:41.2457090Z * [new branch] nullplay/fuse_matmul -> origin/nullplay/fuse_matmul 2025-12-04T10:14:41.2457165Z * [new branch] nullplay_fuse_matmul -> origin/nullplay_fuse_matmul 2025-12-04T10:14:41.2457232Z * [new branch] optimizer_test -> origin/optimizer_test 2025-12-04T10:14:41.2457301Z * [new branch] orig/release/1.10 -> origin/orig/release/1.10 2025-12-04T10:14:41.2457369Z * [new branch] orig/release/1.11 -> origin/orig/release/1.11 2025-12-04T10:14:41.2457437Z * [new branch] orig/release/1.12 -> origin/orig/release/1.12 2025-12-04T10:14:41.2457503Z * [new branch] orig/release/1.13 -> origin/orig/release/1.13 2025-12-04T10:14:41.2457570Z * [new branch] orig/release/1.6 -> origin/orig/release/1.6 2025-12-04T10:14:41.2457636Z * [new branch] orig/release/1.7 -> origin/orig/release/1.7 2025-12-04T10:14:41.2457701Z * [new branch] orig/release/1.8 -> origin/orig/release/1.8 2025-12-04T10:14:41.2457767Z * [new branch] orig/release/1.9 -> origin/orig/release/1.9 2025-12-04T10:14:41.2457831Z * [new branch] orig/release/2.0 -> origin/orig/release/2.0 2025-12-04T10:14:41.2457895Z * [new branch] orig/release/2.1 -> origin/orig/release/2.1 2025-12-04T10:14:41.2457962Z * [new branch] orig/release/2.2 -> origin/orig/release/2.2 2025-12-04T10:14:41.2458026Z * [new branch] orig/release/2.3 -> origin/orig/release/2.3 2025-12-04T10:14:41.2458091Z * [new branch] orig/release/2.4 -> origin/orig/release/2.4 2025-12-04T10:14:41.2458156Z * [new branch] orig/release/2.5 -> origin/orig/release/2.5 2025-12-04T10:14:41.2458220Z * [new branch] orig/release/2.6 -> origin/orig/release/2.6 2025-12-04T10:14:41.2458286Z * [new branch] orig/release/2.7 -> origin/orig/release/2.7 2025-12-04T10:14:41.2458351Z * [new branch] orig/release/2.8 -> origin/orig/release/2.8 2025-12-04T10:14:41.2458415Z * [new branch] orig/release/2.9 -> origin/orig/release/2.9 2025-12-04T10:14:41.2458501Z * [new branch] origin/gh/fxdawnn/1/base -> origin/origin/gh/fxdawnn/1/base 2025-12-04T10:14:41.2458583Z * [new branch] origin/gh/fxdawnn/1/orig -> origin/origin/gh/fxdawnn/1/orig 2025-12-04T10:14:41.2458701Z * [new branch] origin/gh/zpcore/14/orig -> origin/origin/gh/zpcore/14/orig 2025-12-04T10:14:41.2458769Z * [new branch] oulgen-patch-1 -> origin/oulgen-patch-1 2025-12-04T10:14:41.2458836Z * [new branch] oulgen-patch-2 -> origin/oulgen-patch-2 2025-12-04T10:14:41.2458904Z * [new branch] oulgen-patch-3 -> origin/oulgen-patch-3 2025-12-04T10:14:41.2458970Z * [new branch] oulgen-patch-4 -> origin/oulgen-patch-4 2025-12-04T10:14:41.2459036Z * [new branch] padded-tensor -> origin/padded-tensor 2025-12-04T10:14:41.2459098Z * [new branch] pca2 -> origin/pca2 2025-12-04T10:14:41.2459172Z * [new branch] per_channel_backup -> origin/per_channel_backup 2025-12-04T10:14:41.2459234Z * [new branch] perf_ops -> origin/perf_ops 2025-12-04T10:14:41.2459299Z * [new branch] perf_ops_2_9 -> origin/perf_ops_2_9 2025-12-04T10:14:41.2459369Z * [new branch] pianpwk-patch-1 -> origin/pianpwk-patch-1 2025-12-04T10:14:41.2459454Z * [new branch] pianpwk/__draft_debug_mode -> origin/pianpwk/__draft_debug_mode 2025-12-04T10:14:41.2459589Z * [new branch] pianpwk/_debug_mode_for_triton_draft -> origin/pianpwk/_debug_mode_for_triton_draft 2025-12-04T10:14:41.2459691Z * [new branch] pianpwk/_debug_nn_module_compile -> origin/pianpwk/_debug_nn_module_compile 2025-12-04T10:14:41.2459776Z * [new branch] pianpwk/_draft_triton_11_3 -> origin/pianpwk/_draft_triton_11_3 
2025-12-04T10:14:41.2459869Z * [new branch] pianpwk/_manual_bucket_draft -> origin/pianpwk/_manual_bucket_draft 2025-12-04T10:14:41.2459971Z * [new branch] pianpwk/_profile_w_dispatch_keys -> origin/pianpwk/_profile_w_dispatch_keys 2025-12-04T10:14:41.2460069Z * [new branch] pianpwk/_super_draft_debug_mode -> origin/pianpwk/_super_draft_debug_mode 2025-12-04T10:14:41.2460173Z * [new branch] pianpwk/_unbacked_local_shard_size -> origin/pianpwk/_unbacked_local_shard_size 2025-12-04T10:14:41.2460247Z * [new branch] pianpwk/anomaly_tb -> origin/pianpwk/anomaly_tb 2025-12-04T10:14:41.2460328Z * [new branch] pianpwk/auto_fx_annotate -> origin/pianpwk/auto_fx_annotate 2025-12-04T10:14:41.2460441Z * [new branch] pianpwk/backed_size_oblivious_export -> origin/pianpwk/backed_size_oblivious_export 2025-12-04T10:14:41.2460527Z * [new branch] pianpwk/bert_dynamic_perf -> origin/pianpwk/bert_dynamic_perf 2025-12-04T10:14:41.2460660Z * [new branch] pianpwk/debug_fwd_stack_traces -> origin/pianpwk/debug_fwd_stack_traces 2025-12-04T10:14:41.2460747Z * [new branch] pianpwk/debug_hash_tensor -> origin/pianpwk/debug_hash_tensor 2025-12-04T10:14:41.2460837Z * [new branch] pianpwk/debug_mode_annotate -> origin/pianpwk/debug_mode_annotate 2025-12-04T10:14:41.2460925Z * [new branch] pianpwk/debug_mode_defaults -> origin/pianpwk/debug_mode_defaults 2025-12-04T10:14:41.2461005Z * [new branch] pianpwk/debug_mode_hacks -> origin/pianpwk/debug_mode_hacks 2025-12-04T10:14:41.2461111Z * [new branch] pianpwk/debug_mode_opcall_refactor -> origin/pianpwk/debug_mode_opcall_refactor 2025-12-04T10:14:41.2461199Z * [new branch] pianpwk/debug_mode_show_ids -> origin/pianpwk/debug_mode_show_ids 2025-12-04T10:14:41.2461282Z * [new branch] pianpwk/debug_mode_triton -> origin/pianpwk/debug_mode_triton 2025-12-04T10:14:41.2461376Z * [new branch] pianpwk/debug_show_stack_trace -> origin/pianpwk/debug_show_stack_trace 2025-12-04T10:14:41.2461474Z * [new branch] pianpwk/debug_wait_on_collective -> origin/pianpwk/debug_wait_on_collective 2025-12-04T10:14:41.2461612Z * [new branch] pianpwk/debugmode_compile_tf -> origin/pianpwk/debugmode_compile_tf 2025-12-04T10:14:41.2461736Z * [new branch] pianpwk/dispatch_key_debugging_for_debug -> origin/pianpwk/dispatch_key_debugging_for_debug 2025-12-04T10:14:41.2461842Z * [new branch] pianpwk/draft_debug_mode_tfcompile -> origin/pianpwk/draft_debug_mode_tfcompile 2025-12-04T10:14:41.2461936Z * [new branch] pianpwk/draft_multikernel_nn -> origin/pianpwk/draft_multikernel_nn 2025-12-04T10:14:41.2462048Z * [new branch] pianpwk/draft_multikernel_status_10_5 -> origin/pianpwk/draft_multikernel_status_10_5 2025-12-04T10:14:41.2462140Z * [new branch] pianpwk/dtensor_custom_chunk -> origin/pianpwk/dtensor_custom_chunk 2025-12-04T10:14:41.2462240Z * [new branch] pianpwk/dtensor_unbacked_keypath -> origin/pianpwk/dtensor_unbacked_keypath 2025-12-04T10:14:41.2462319Z * [new branch] pianpwk/event_list_tree -> origin/pianpwk/event_list_tree 2025-12-04T10:14:41.2462401Z * [new branch] pianpwk/false_numel_refs -> origin/pianpwk/false_numel_refs 2025-12-04T10:14:41.2462478Z * [new branch] pianpwk/maybe_guard_rel -> origin/pianpwk/maybe_guard_rel 2025-12-04T10:14:41.2462579Z * [new branch] pianpwk/multikernel_hints_draft -> origin/pianpwk/multikernel_hints_draft 2025-12-04T10:14:41.2462727Z * [new branch] pianpwk/no_size_oblivious_slice_scat -> origin/pianpwk/no_size_oblivious_slice_scat 2025-12-04T10:14:41.2462841Z * [new branch] pianpwk/oblivious_reshape_view_better -> 
origin/pianpwk/oblivious_reshape_view_better 2025-12-04T10:14:41.2462924Z * [new branch] pianpwk/pre_forward_hook -> origin/pianpwk/pre_forward_hook 2025-12-04T10:14:41.2463029Z * [new branch] pianpwk/skip_python_keys_alternate -> origin/pianpwk/skip_python_keys_alternate 2025-12-04T10:14:41.2463131Z * [new branch] pianpwk/skip_python_keys_in_guards -> origin/pianpwk/skip_python_keys_in_guards 2025-12-04T10:14:41.2463213Z * [new branch] pianpwk/sym_tokens_draft -> origin/pianpwk/sym_tokens_draft 2025-12-04T10:14:41.2463292Z * [new branch] pianpwk/symint_one_hot -> origin/pianpwk/symint_one_hot 2025-12-04T10:14:41.2463404Z * [new branch] pianpwk/test_pointwise_guard_or_false -> origin/pianpwk/test_pointwise_guard_or_false 2025-12-04T10:14:41.2463501Z * [new branch] pianpwk/totally_draft_sym_wrap -> origin/pianpwk/totally_draft_sym_wrap 2025-12-04T10:14:41.2463581Z * [new branch] pianpwk/try_dumb_stuff -> origin/pianpwk/try_dumb_stuff 2025-12-04T10:14:41.2463660Z * [new branch] pianpwk/try_dumb_stuff_2 -> origin/pianpwk/try_dumb_stuff_2 2025-12-04T10:14:41.2463751Z * [new branch] pianpwk/unbacked_dtensor_mm -> origin/pianpwk/unbacked_dtensor_mm 2025-12-04T10:14:41.2463845Z * [new branch] pianpwk/unbacked_tracing_12_2 -> origin/pianpwk/unbacked_tracing_12_2 2025-12-04T10:14:41.2463922Z * [new branch] pianpwk/user_symints -> origin/pianpwk/user_symints 2025-12-04T10:14:41.2464000Z * [new branch] pianpwk/wan21_reshape -> origin/pianpwk/wan21_reshape 2025-12-04T10:14:41.2464091Z * [new branch] piz/fix_partial_backward_1112 -> origin/piz/fix_partial_backward_1112 2025-12-04T10:14:41.2464169Z * [new branch] piz/prop_cache_clean -> origin/piz/prop_cache_clean 2025-12-04T10:14:41.2464237Z * [new branch] pool-separate -> origin/pool-separate 2025-12-04T10:14:41.2464297Z * [new branch] pr-156087 -> origin/pr-156087 2025-12-04T10:14:41.2464357Z * [new branch] pr/131860 -> origin/pr/131860 2025-12-04T10:14:41.2464425Z * [new branch] predispatch_to -> origin/predispatch_to 2025-12-04T10:14:41.2464489Z * [new branch] protect-c17 -> origin/protect-c17 2025-12-04T10:14:41.2464581Z * [new branch] pt-opt-cuda3 -> origin/pt-opt-cuda3 2025-12-04T10:14:41.2464662Z * [new branch] python_compiled_autograd -> origin/python_compiled_autograd 2025-12-04T10:14:41.2464789Z * [new branch] q1l1/fix_device_moved_constant_type_unknown -> origin/q1l1/fix_device_moved_constant_type_unknown 2025-12-04T10:14:41.2464929Z * [new branch] q1l1/fix_wrong_default_type_for_kernel_call_args -> origin/q1l1/fix_wrong_default_type_for_kernel_call_args 2025-12-04T10:14:41.2465009Z * [new branch] qchip/export-D54134695 -> origin/qchip/export-D54134695 2025-12-04T10:14:41.2465081Z * [new branch] quote-pytest_cache -> origin/quote-pytest_cache 2025-12-04T10:14:41.2465177Z * [new branch] reland-accgrad-stream-warn -> origin/reland-accgrad-stream-warn 2025-12-04T10:14:41.2465242Z * [new branch] release/1.10 -> origin/release/1.10 2025-12-04T10:14:41.2465306Z * [new branch] release/1.11 -> origin/release/1.11 2025-12-04T10:14:41.2465369Z * [new branch] release/1.12 -> origin/release/1.12 2025-12-04T10:14:41.2465430Z * [new branch] release/1.13 -> origin/release/1.13 2025-12-04T10:14:41.2465492Z * [new branch] release/1.4 -> origin/release/1.4 2025-12-04T10:14:41.2465583Z * [new branch] release/1.4.1 -> origin/release/1.4.1 2025-12-04T10:14:41.2465644Z * [new branch] release/1.5 -> origin/release/1.5 2025-12-04T10:14:41.2465706Z * [new branch] release/1.6 -> origin/release/1.6 2025-12-04T10:14:41.2465766Z * [new branch] release/1.7 -> 
origin/release/1.7 2025-12-04T10:14:41.2465825Z * [new branch] release/1.8 -> origin/release/1.8 2025-12-04T10:14:41.2465885Z * [new branch] release/1.9 -> origin/release/1.9 2025-12-04T10:14:41.2465947Z * [new branch] release/2.0 -> origin/release/2.0 2025-12-04T10:14:41.2466006Z * [new branch] release/2.1 -> origin/release/2.1 2025-12-04T10:14:41.2466066Z * [new branch] release/2.2 -> origin/release/2.2 2025-12-04T10:14:41.2466127Z * [new branch] release/2.3 -> origin/release/2.3 2025-12-04T10:14:41.2466185Z * [new branch] release/2.4 -> origin/release/2.4 2025-12-04T10:14:41.2466245Z * [new branch] release/2.5 -> origin/release/2.5 2025-12-04T10:14:41.2466304Z * [new branch] release/2.6 -> origin/release/2.6 2025-12-04T10:14:41.2466364Z * [new branch] release/2.7 -> origin/release/2.7 2025-12-04T10:14:41.2466424Z * [new branch] release/2.8 -> origin/release/2.8 2025-12-04T10:14:41.2466485Z * [new branch] release/2.9 -> origin/release/2.9 2025-12-04T10:14:41.2466548Z * [new branch] release_notes -> origin/release_notes 2025-12-04T10:14:41.2466624Z * [new branch] remove_pyinterpreter -> origin/remove_pyinterpreter 2025-12-04T10:14:41.2466745Z * [new branch] replace-pytorch-labs-20250812-195836 -> origin/replace-pytorch-labs-20250812-195836 2025-12-04T10:14:41.2466865Z * [new branch] replace-pytorch-labs-20250812-200248 -> origin/replace-pytorch-labs-20250812-200248 2025-12-04T10:14:41.2466985Z * [new branch] replace-pytorch-labs-20250812-200324 -> origin/replace-pytorch-labs-20250812-200324 2025-12-04T10:14:41.2467100Z * [new branch] replace-pytorch-labs-20250812-204020 -> origin/replace-pytorch-labs-20250812-204020 2025-12-04T10:14:41.2467227Z * [new branch] revert-131069-gh/krzysztofjordan/1/head -> origin/revert-131069-gh/krzysztofjordan/1/head 2025-12-04T10:14:41.2467365Z * [new branch] revert-131469-gh/andrewor14/51/head -> origin/revert-131469-gh/andrewor14/51/head 2025-12-04T10:14:41.2467466Z * [new branch] revert-152361-gh/fadara01/1/head -> origin/revert-152361-gh/fadara01/1/head 2025-12-04T10:14:41.2467567Z * [new branch] revert-156870-gh/skarjala/3/head -> origin/revert-156870-gh/skarjala/3/head 2025-12-04T10:14:41.2467737Z * [new branch] revert-157914-cherry-pick-157503-by-pytorch_bot_bot_ -> origin/revert-157914-cherry-pick-157503-by-pytorch_bot_bot_ 2025-12-04T10:14:41.2467831Z * [new branch] revert-hoo-invoke-subgraph -> origin/revert-hoo-invoke-subgraph 2025-12-04T10:14:41.2467930Z * [new branch] revert_always_build_distributed -> origin/revert_always_build_distributed 2025-12-04T10:14:41.2467997Z * [new branch] rms_norm_patch -> origin/rms_norm_patch 2025-12-04T10:14:41.2468093Z * [new branch] ruisi/fix_all_to_all_estimation -> origin/ruisi/fix_all_to_all_estimation 2025-12-04T10:14:41.2468179Z * [new branch] ruisi/fix_comm_estimation -> origin/ruisi/fix_comm_estimation 2025-12-04T10:14:41.2468283Z * [new branch] ruisi/fix_dynamic_shape_estimation -> origin/ruisi/fix_dynamic_shape_estimation 2025-12-04T10:14:41.2468408Z * [new branch] ruisi/fix_llama3_autobucketing -> origin/ruisi/fix_llama3_autobucketing 2025-12-04T10:14:41.2468513Z * [new branch] ruisi/fix_manual_bucketing_ep_pass -> origin/ruisi/fix_manual_bucketing_ep_pass 2025-12-04T10:14:41.2473897Z * [new branch] ruisi/manual_bucket_pass -> origin/ruisi/manual_bucket_pass 2025-12-04T10:14:41.2474058Z * [new branch] ryanguo99/cleanup-dynamo-expected-failures -> origin/ryanguo99/cleanup-dynamo-expected-failures 2025-12-04T10:14:41.2474150Z * [new branch] ryanguo99/fix-closure-var -> origin/ryanguo99/fix-closure-var 
2025-12-04T10:14:41.2474233Z * [new branch] rzou/faketensor_bench -> origin/rzou/faketensor_bench 2025-12-04T10:14:41.2474297Z * [new branch] rzou/njt -> origin/rzou/njt 2025-12-04T10:14:41.2474359Z * [new branch] rzou/pca -> origin/rzou/pca 2025-12-04T10:14:41.2474425Z * [new branch] rzou/realprop -> origin/rzou/realprop 2025-12-04T10:14:41.2474493Z * [new branch] samplevllm -> origin/samplevllm 2025-12-04T10:14:41.2474659Z * [new branch] sanchitintel/weird_thing_with_test_cpu_select_algorithm -> origin/sanchitintel/weird_thing_with_test_cpu_select_algorithm 2025-12-04T10:14:41.2474752Z * [new branch] sapling-pr-archive-SS-JIA -> origin/sapling-pr-archive-SS-JIA 2025-12-04T10:14:41.2474866Z * [new branch] sapling-pr-archive-tushar00jain -> origin/sapling-pr-archive-tushar00jain 2025-12-04T10:14:41.2474926Z * [new branch] save -> origin/save 2025-12-04T10:14:41.2474989Z * [new branch] scaled_mm -> origin/scaled_mm 2025-12-04T10:14:41.2475055Z * [new branch] scan_attempt -> origin/scan_attempt 2025-12-04T10:14:41.2475117Z * [new branch] sdym/2.5.1 -> origin/sdym/2.5.1 2025-12-04T10:14:41.2475225Z * [new branch] sekyondaMeta-dynamoconfig-fix -> origin/sekyondaMeta-dynamoconfig-fix 2025-12-04T10:14:41.2475309Z * [new branch] shengf/fx-xform-perf -> origin/shengf/fx-xform-perf 2025-12-04T10:14:41.2475385Z * [new branch] shoumikhin-patch-1 -> origin/shoumikhin-patch-1 2025-12-04T10:14:41.2475459Z * [new branch] solve-accuracy-fix -> origin/solve-accuracy-fix 2025-12-04T10:14:41.2475539Z * [new branch] some_rocm_inductor_skips -> origin/some_rocm_inductor_skips 2025-12-04T10:14:41.2475619Z * [new branch] soulitzer/stash-tls-ac -> origin/soulitzer/stash-tls-ac 2025-12-04T10:14:41.2475760Z * [new branch] sparse-mm-bf16-support -> origin/sparse-mm-bf16-support 2025-12-04T10:14:41.2475833Z * [new branch] starterTaskUpdate -> origin/starterTaskUpdate 2025-12-04T10:14:41.2475892Z * [new branch] suo -> origin/suo 2025-12-04T10:14:41.2475957Z * [new branch] sve-poc -> origin/sve-poc 2025-12-04T10:14:41.2476019Z * [new branch] switch-bn -> origin/switch-bn 2025-12-04T10:14:41.2476110Z * [new branch] sy_annotation_in_autograd_hop -> origin/sy_annotation_in_autograd_hop 2025-12-04T10:14:41.2476180Z * [new branch] sy_aot_eager_record -> origin/sy_aot_eager_record 2025-12-04T10:14:41.2476248Z * [new branch] sy_custom_bucketing -> origin/sy_custom_bucketing 2025-12-04T10:14:41.2476315Z * [new branch] sy_debug_mode_test -> origin/sy_debug_mode_test 2025-12-04T10:14:41.2476383Z * [new branch] sy_deserialize -> origin/sy_deserialize 2025-12-04T10:14:41.2476449Z * [new branch] sy_dump_gm_code -> origin/sy_dump_gm_code 2025-12-04T10:14:41.2476510Z * [new branch] sy_exp -> origin/sy_exp 2025-12-04T10:14:41.2476582Z * [new branch] sy_export_annotation -> origin/sy_export_annotation 2025-12-04T10:14:41.2476695Z * [new branch] sy_invoke_subgraph -> origin/sy_invoke_subgraph 2025-12-04T10:14:41.2476763Z * [new branch] sy_kernel_bw_name -> origin/sy_kernel_bw_name 2025-12-04T10:14:41.2476827Z * [new branch] sy_multi_arch -> origin/sy_multi_arch 2025-12-04T10:14:41.2476893Z * [new branch] sy_nn_module_stack -> origin/sy_nn_module_stack 2025-12-04T10:14:41.2476962Z * [new branch] sy_original_dtensor -> origin/sy_original_dtensor 2025-12-04T10:14:41.2477029Z * [new branch] sy_profiler_cia -> origin/sy_profiler_cia 2025-12-04T10:14:41.2477092Z * [new branch] symm_mem_sync -> origin/symm_mem_sync 2025-12-04T10:14:41.2477175Z * [new branch] sympy-bottleneck-repro -> origin/sympy-bottleneck-repro 2025-12-04T10:14:41.2477252Z * 
[new branch] tensordict_integration -> origin/tensordict_integration 2025-12-04T10:14:41.2477332Z * [new branch] test-move-conda-builds -> origin/test-move-conda-builds 2025-12-04T10:14:41.2477394Z * [new branch] test-old -> origin/test-old 2025-12-04T10:14:41.2477458Z * [new branch] test/bmm_heur -> origin/test/bmm_heur 2025-12-04T10:14:41.2477553Z * [new branch] tianren/customOp_autotune_fix -> origin/tianren/customOp_autotune_fix 2025-12-04T10:14:41.2477665Z * [new branch] tianren/customOp_enable_max_autotune -> origin/tianren/customOp_enable_max_autotune 2025-12-04T10:14:41.2477748Z * [new branch] tianren/customOp_fusion -> origin/tianren/customOp_fusion 2025-12-04T10:14:41.2477870Z * [new branch] tianren/customop_collectiveop_benchmark -> origin/tianren/customop_collectiveop_benchmark 2025-12-04T10:14:41.2478006Z * [new branch] tianren/customop_collectiveop_benchmark_fix -> origin/tianren/customop_collectiveop_benchmark_fix 2025-12-04T10:14:41.2478107Z * [new branch] tianren/customop_dynamic_config -> origin/tianren/customop_dynamic_config 2025-12-04T10:14:41.2478198Z * [new branch] tianren/dynamic_range_input -> origin/tianren/dynamic_range_input 2025-12-04T10:14:41.2478298Z * [new branch] tianren/dynamic_range_input_fix -> origin/tianren/dynamic_range_input_fix 2025-12-04T10:14:41.2478401Z * [new branch] tianren/dynamic_range_input_merge -> origin/tianren/dynamic_range_input_merge 2025-12-04T10:14:41.2478502Z * [new branch] tianren/flex_paged_attn_fix_temp -> origin/tianren/flex_paged_attn_fix_temp 2025-12-04T10:14:41.2478608Z * [new branch] tianren/fx_codegen_dump -> origin/tianren/fx_codegen_dump 2025-12-04T10:14:41.2478691Z * [new branch] tianren/symmetric_memory -> origin/tianren/symmetric_memory 2025-12-04T10:14:41.2478755Z * [new branch] tianren/test -> origin/tianren/test 2025-12-04T10:14:41.2478833Z * [new branch] tidy_performance_cyy -> origin/tidy_performance_cyy 2025-12-04T10:14:41.2478891Z * [new branch] tmp -> origin/tmp 2025-12-04T10:14:41.2478956Z * [new branch] torchtitan_ep -> origin/torchtitan_ep 2025-12-04T10:14:41.2479033Z * [new branch] torchtitan_integration -> origin/torchtitan_integration 2025-12-04T10:14:41.2479114Z * [new branch] trace_fsdp_torchtune_lora -> origin/trace_fsdp_torchtune_lora 2025-12-04T10:14:41.2479199Z * [new branch] traceable_fsdp_unit_tests -> origin/traceable_fsdp_unit_tests 2025-12-04T10:14:41.2479268Z * [new branch] tree_loop_vec_base -> origin/tree_loop_vec_base 2025-12-04T10:14:41.2479331Z * [new branch] triton_kernel -> origin/triton_kernel 2025-12-04T10:14:41.2479394Z * [new branch] tt_pkg_1908 -> origin/tt_pkg_1908 2025-12-04T10:14:41.2479482Z * [new branch] type_dec -> origin/type_dec 2025-12-04T10:14:41.2479573Z * [new branch] udate-sphinx-dependancies -> origin/udate-sphinx-dependancies 2025-12-04T10:14:41.2479713Z * [new branch] update-audio-commit-hash/17630256502-1803-1 -> origin/update-audio-commit-hash/17630256502-1803-1 2025-12-04T10:14:41.2479844Z * [new branch] update-audio-commit-hash/19087141161-1916-1 -> origin/update-audio-commit-hash/19087141161-1916-1 2025-12-04T10:14:41.2479973Z * [new branch] update-audio-commit-hash/19250643381-1929-1 -> origin/update-audio-commit-hash/19250643381-1929-1 2025-12-04T10:14:41.2480104Z * [new branch] update-audio-commit-hash/19397724337-1935-1 -> origin/update-audio-commit-hash/19397724337-1935-1 2025-12-04T10:14:41.2480230Z * [new branch] update-audio-commit-hash/19555670148-1941-1 -> origin/update-audio-commit-hash/19555670148-1941-1 2025-12-04T10:14:41.2480357Z * [new branch] 
update-audio-commit-hash/19750627930-1946-1 -> origin/update-audio-commit-hash/19750627930-1946-1 2025-12-04T10:14:41.2480496Z * [new branch] update-triton-commit-hash/13663274526-1487-2 -> origin/update-triton-commit-hash/13663274526-1487-2 2025-12-04T10:14:41.2480672Z * [new branch] update-vision-commit-hash/19087141161-1916-1 -> origin/update-vision-commit-hash/19087141161-1916-1 2025-12-04T10:14:41.2480806Z * [new branch] update-vision-commit-hash/19184897099-1925-1 -> origin/update-vision-commit-hash/19184897099-1925-1 2025-12-04T10:14:41.2480938Z * [new branch] update-vision-commit-hash/19250643381-1929-1 -> origin/update-vision-commit-hash/19250643381-1929-1 2025-12-04T10:14:41.2481066Z * [new branch] update-vision-commit-hash/19381328640-1934-1 -> origin/update-vision-commit-hash/19381328640-1934-1 2025-12-04T10:14:41.2481198Z * [new branch] update-vision-commit-hash/19485237164-1938-1 -> origin/update-vision-commit-hash/19485237164-1938-1 2025-12-04T10:14:41.2481327Z * [new branch] update-vllm-commit-hash/18451675449-1879-1 -> origin/update-vllm-commit-hash/18451675449-1879-1 2025-12-04T10:14:41.2481413Z * [new branch] update-vllm-dockerfile -> origin/update-vllm-dockerfile 2025-12-04T10:14:41.2481541Z * [new branch] update-xla-commit-hash/19224287370-211-1 -> origin/update-xla-commit-hash/19224287370-211-1 2025-12-04T10:14:41.2481661Z * [new branch] update-xla-commit-hash/19422028566-212-1 -> origin/update-xla-commit-hash/19422028566-212-1 2025-12-04T10:14:41.2481828Z * [new branch] update-xla-commit-hash/19626841311-213-1 -> origin/update-xla-commit-hash/19626841311-213-1 2025-12-04T10:14:41.2481953Z * [new branch] update_docs_torch_multinomial_issue#125388 -> origin/update_docs_torch_multinomial_issue#125388 2025-12-04T10:14:41.2482032Z * [new branch] update_operator_readme -> origin/update_operator_readme 2025-12-04T10:14:41.2482122Z * [new branch] update_slow_tests_1722488736 -> origin/update_slow_tests_1722488736 2025-12-04T10:14:41.2482207Z * [new branch] update_slow_tests_1722879173 -> origin/update_slow_tests_1722879173 2025-12-04T10:14:41.2482292Z * [new branch] update_slow_tests_1762155677 -> origin/update_slow_tests_1762155677 2025-12-04T10:14:41.2482377Z * [new branch] update_slow_tests_1763365283 -> origin/update_slow_tests_1763365283 2025-12-04T10:14:41.2482462Z * [new branch] update_submodule_FBGEMM -> origin/update_submodule_FBGEMM 2025-12-04T10:14:41.2482540Z * [new branch] update_submodule_kineto -> origin/update_submodule_kineto 2025-12-04T10:14:41.2482631Z * [new branch] update_submodule_tensorpipe -> origin/update_submodule_tensorpipe 2025-12-04T10:14:41.2482729Z * [new branch] upload-tests-for-autorevert -> origin/upload-tests-for-autorevert 2025-12-04T10:14:41.2482829Z * [new branch] v0.1.2 -> origin/v0.1.2 2025-12-04T10:14:41.2482891Z * [new branch] v1.0.1 -> origin/v1.0.1 2025-12-04T10:14:41.2482949Z * [new branch] v1.0.3 -> origin/v1.0.3 2025-12-04T10:14:41.2483007Z * [new branch] v1.1.0 -> origin/v1.1.0 2025-12-04T10:14:41.2483063Z * [new branch] v1.2.0 -> origin/v1.2.0 2025-12-04T10:14:41.2483119Z * [new branch] v1.3.0 -> origin/v1.3.0 2025-12-04T10:14:41.2483177Z * [new branch] v1.3.1 -> origin/v1.3.1 2025-12-04T10:14:41.2483242Z * [new branch] validate_fn -> origin/validate_fn 2025-12-04T10:14:41.2483309Z * [new branch] validations_2.6 -> origin/validations_2.6 2025-12-04T10:14:41.2483378Z * [new branch] validations_2.8 -> origin/validations_2.8 2025-12-04T10:14:41.2483444Z * [new branch] varlen-api -> origin/varlen-api 2025-12-04T10:14:41.2483520Z * 
[new branch] varlen-api-backup -> origin/varlen-api-backup 2025-12-04T10:14:41.2483597Z * [new branch] varlen_batch_invariance -> origin/varlen_batch_invariance 2025-12-04T10:14:41.2483661Z * [new branch] viable/strict -> origin/viable/strict 2025-12-04T10:14:41.2483778Z * [new branch] vishal9-team/dtensor_parallelism_toy -> origin/vishal9-team/dtensor_parallelism_toy 2025-12-04T10:14:41.2483845Z * [new branch] vllmbuildci -> origin/vllmbuildci 2025-12-04T10:14:41.2483906Z * [new branch] vllmpin -> origin/vllmpin 2025-12-04T10:14:41.2483996Z * [new branch] vscode-recommend-pyrefly -> origin/vscode-recommend-pyrefly 2025-12-04T10:14:41.2484064Z * [new branch] wdvr-patch-1 -> origin/wdvr-patch-1 2025-12-04T10:14:41.2484128Z * [new branch] wdvr/iss_145259 -> origin/wdvr/iss_145259 2025-12-04T10:14:41.2484188Z * [new branch] whc/pei -> origin/whc/pei 2025-12-04T10:14:41.2484254Z * [new branch] whc/pp_fix -> origin/whc/pp_fix 2025-12-04T10:14:41.2484316Z * [new branch] whc/sharding -> origin/whc/sharding 2025-12-04T10:14:41.2484380Z * [new branch] whc/sharding2 -> origin/whc/sharding2 2025-12-04T10:14:41.2484468Z * [new branch] whc/uneven -> origin/whc/uneven 2025-12-04T10:14:41.2484538Z * [new branch] whc/uneven-merge -> origin/whc/uneven-merge 2025-12-04T10:14:41.2484601Z * [new branch] win_warnings -> origin/win_warnings 2025-12-04T10:14:41.2484676Z * [new branch] windows_libtorch_free -> origin/windows_libtorch_free 2025-12-04T10:14:41.2484739Z * [new branch] xmfan-war -> origin/xmfan-war 2025-12-04T10:14:41.2484801Z * [new branch] xmfan/ca_0516 -> origin/xmfan/ca_0516 2025-12-04T10:14:41.2484870Z * [new branch] xmfan/ca_1051b93192 -> origin/xmfan/ca_1051b93192 2025-12-04T10:14:41.2485018Z * [new branch] xmfan/ca_1a722f62c248391fc4a542e8851a5559aa356ae8 -> origin/xmfan/ca_1a722f62c248391fc4a542e8851a5559aa356ae8 2025-12-04T10:14:41.2485090Z * [new branch] xmfan/ca_5a2be192d1 -> origin/xmfan/ca_5a2be192d1 2025-12-04T10:14:41.2485160Z * [new branch] xmfan/ca_9d59b516e9 -> origin/xmfan/ca_9d59b516e9 2025-12-04T10:14:41.2485224Z * [new branch] xmfan/ca_apr8 -> origin/xmfan/ca_apr8 2025-12-04T10:14:41.2485289Z * [new branch] xmfan/ca_base -> origin/xmfan/ca_base 2025-12-04T10:14:41.2485355Z * [new branch] xmfan/ca_dynamic -> origin/xmfan/ca_dynamic 2025-12-04T10:14:41.2485455Z * [new branch] xmfan/ca_fix_dyn -> origin/xmfan/ca_fix_dyn 2025-12-04T10:14:41.2485529Z * [new branch] xmfan/ca_fix_lowering -> origin/xmfan/ca_fix_lowering 2025-12-04T10:14:41.2485603Z * [new branch] xmfan/ca_fix_polyfills -> origin/xmfan/ca_fix_polyfills 2025-12-04T10:14:41.2485665Z * [new branch] xmfan/ca_jan3 -> origin/xmfan/ca_jan3 2025-12-04T10:14:41.2485729Z * [new branch] xmfan/ca_jun18 -> origin/xmfan/ca_jun18 2025-12-04T10:14:41.2485794Z * [new branch] xmfan/ca_jun24 -> origin/xmfan/ca_jun24 2025-12-04T10:14:41.2485858Z * [new branch] xmfan/ca_nested -> origin/xmfan/ca_nested 2025-12-04T10:14:41.2485926Z * [new branch] xmfan/ca_overhead -> origin/xmfan/ca_overhead 2025-12-04T10:14:41.2486016Z * [new branch] xmfan/ca_overhead_0eba7e5451 -> origin/xmfan/ca_overhead_0eba7e5451 2025-12-04T10:14:41.2486083Z * [new branch] xmfan/cacu_jun18 -> origin/xmfan/cacu_jun18 2025-12-04T10:14:41.2486150Z * [new branch] xmfan/cacu_jun19 -> origin/xmfan/cacu_jun19 2025-12-04T10:14:41.2486217Z * [new branch] xmfan/cacu_jun4 -> origin/xmfan/cacu_jun4 2025-12-04T10:14:41.2486299Z * [new branch] xmfan/disable_duck_shape -> origin/xmfan/disable_duck_shape 2025-12-04T10:14:41.2486398Z * [new branch] xmfan/fca_cpp_node_passthrough -> 
origin/xmfan/fca_cpp_node_passthrough 2025-12-04T10:14:41.2486549Z * [new branch] xmfan/post_3945954741e2d37023c5d6954f9483008e0892f9 -> origin/xmfan/post_3945954741e2d37023c5d6954f9483008e0892f9 2025-12-04T10:14:41.2486693Z * [new branch] xmfan/pre_3945954741e2d37023c5d6954f9483008e0892f9 -> origin/xmfan/pre_3945954741e2d37023c5d6954f9483008e0892f9 2025-12-04T10:14:41.2486763Z * [new branch] xmfan/single_step -> origin/xmfan/single_step 2025-12-04T10:14:41.2486828Z * [new branch] xmfan/sth_0829 -> origin/xmfan/sth_0829 2025-12-04T10:14:41.2486890Z * [new branch] xmfan/test -> origin/xmfan/test 2025-12-04T10:14:41.2486976Z * [new branch] yguo/debug-0226-constexpr -> origin/yguo/debug-0226-constexpr 2025-12-04T10:14:41.2487053Z * [new branch] yguo/new_latest_changes -> origin/yguo/new_latest_changes 2025-12-04T10:14:41.2487149Z * [new branch] yguo/patch_constexpr_changes -> origin/yguo/patch_constexpr_changes 2025-12-04T10:14:41.2487244Z * [new branch] yiming/bootcamp -> origin/yiming/bootcamp 2025-12-04T10:14:41.2487344Z * [new branch] yiming/run_with_start_end_rng_hop -> origin/yiming/run_with_start_end_rng_hop 2025-12-04T10:14:41.2487409Z * [new branch] yolo-llama3 -> origin/yolo-llama3 2025-12-04T10:14:41.2487481Z * [new branch] zainr/canary-test -> origin/zainr/canary-test 2025-12-04T10:14:41.2487567Z * [new branch] zainr/cleanup-gh-runners -> origin/zainr/cleanup-gh-runners 2025-12-04T10:14:41.2487648Z * [new branch] zainr/pull-migration-c -> origin/zainr/pull-migration-c 2025-12-04T10:14:41.2487710Z * [new branch] zainr/test2 -> origin/zainr/test2 2025-12-04T10:14:41.2487782Z * [new branch] zasdfgbnm-patch-3 -> origin/zasdfgbnm-patch-3 2025-12-04T10:14:41.2487843Z * [new branch] zb2p -> origin/zb2p 2025-12-04T10:14:41.2487928Z * [new branch] zeros-and-scatter-part2 -> origin/zeros-and-scatter-part2 2025-12-04T10:14:41.2488014Z * [new branch] zhxchen17/ci/vllm_lora_oom -> origin/zhxchen17/ci/vllm_lora_oom 2025-12-04T10:14:41.2488117Z * [new branch] zhxchen17/ci/vllm_multimodal_oom -> origin/zhxchen17/ci/vllm_multimodal_oom 2025-12-04T10:14:41.2488217Z * [new branch] zhxchen17/ci/vllm_pin -> origin/zhxchen17/ci/vllm_pin 2025-12-04T10:14:41.2488339Z * [new branch] zhxchen17/dynamo/unsafe_drop_all_guards -> origin/zhxchen17/dynamo/unsafe_drop_all_guards 2025-12-04T10:14:41.2488436Z * [new branch] zhxchen17/export/call_override -> origin/zhxchen17/export/call_override 2025-12-04T10:14:41.2488521Z * [new branch] zhxchen17/export/codemod1 -> origin/zhxchen17/export/codemod1 2025-12-04T10:14:41.2488610Z * [new branch] zhxchen17/export/ctx_return -> origin/zhxchen17/export/ctx_return 2025-12-04T10:14:41.2488739Z * [new branch] zhxchen17/export/disable_side_effect_warn -> origin/zhxchen17/export/disable_side_effect_warn 2025-12-04T10:14:41.2488836Z * [new branch] zhxchen17/export/pytree_check -> origin/zhxchen17/export/pytree_check 2025-12-04T10:14:41.2488925Z * [new branch] zhxchen17/precompile/aoti -> origin/zhxchen17/precompile/aoti 2025-12-04T10:14:41.2489022Z * [new branch] zhxchen17/precompile/globals -> origin/zhxchen17/precompile/globals 2025-12-04T10:14:41.2489138Z * [new branch] zhxchen17/precompile/inductor_guards -> origin/zhxchen17/precompile/inductor_guards 2025-12-04T10:14:41.2489212Z * [new branch] zhxchen17/scratch/0 -> origin/zhxchen17/scratch/0 2025-12-04T10:14:41.2489316Z * [new branch] zhxchen17/torch_export_api_update -> origin/zhxchen17/torch_export_api_update 2025-12-04T10:14:41.2489392Z * [new branch] zhxhcen17/moodycamel -> origin/zhxhcen17/moodycamel 
2025-12-04T10:14:41.2489468Z * [new branch] zxiiro/build-times -> origin/zxiiro/build-times 2025-12-04T10:14:41.2489539Z * [new branch] zxiiro/c7i.2xlarge -> origin/zxiiro/c7i.2xlarge 2025-12-04T10:14:41.2489619Z * [new branch] zxiiro/c7i.2xlarge.h100 -> origin/zxiiro/c7i.2xlarge.h100 2025-12-04T10:14:41.2489682Z * [new branch] zxiiro/main -> origin/zxiiro/main 2025-12-04T10:14:41.2489747Z * [new branch] zxiiro/risc64 -> origin/zxiiro/risc64 2025-12-04T10:14:41.2489837Z * [new branch] zxiiro/test-multicloud-arc -> origin/zxiiro/test-multicloud-arc 2025-12-04T10:14:41.2489906Z t [tag update] ciflow/inductor/169437 -> ciflow/inductor/169437 2025-12-04T10:14:41.2489970Z t [tag update] ciflow/trunk/169437 -> ciflow/trunk/169437 2025-12-04T10:14:41.2490106Z * [new tag] trunk/c0cb6e78404416d418350632bfc554710a5f7281 -> trunk/c0cb6e78404416d418350632bfc554710a5f7281 2025-12-04T10:14:41.4554825Z [command]/usr/bin/git rev-parse --verify --quiet ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32^{object} 2025-12-04T10:14:41.4786312Z ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:14:41.4793310Z ##[endgroup] 2025-12-04T10:14:41.4793868Z ##[group]Determining the checkout info 2025-12-04T10:14:41.4794919Z ##[endgroup] 2025-12-04T10:14:41.4802824Z [command]/usr/bin/git sparse-checkout disable 2025-12-04T10:14:41.4912345Z [command]/usr/bin/git config --local --unset-all extensions.worktreeConfig 2025-12-04T10:14:41.4951767Z ##[group]Checking out the ref 2025-12-04T10:14:41.4956988Z [command]/usr/bin/git checkout --progress --force ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:14:41.5884489Z Previous HEAD position was c0cb6e784044 [DTensor] ExplicitRedistributionContext warning mode (#169452) 2025-12-04T10:14:41.5889546Z HEAD is now at ffd9b0fb4355 Resolve collective autotuning test failure on arm (#168919) 2025-12-04T10:14:41.5995116Z ##[endgroup] 2025-12-04T10:14:41.5995718Z ##[group]Setting up auth for fetching submodules 2025-12-04T10:14:41.6006737Z [command]/usr/bin/git config --global http.https://github.com/.extraheader AUTHORIZATION: basic *** 2025-12-04T10:14:41.6063202Z [command]/usr/bin/git config --global --unset-all url.https://github.com/.insteadOf 2025-12-04T10:14:41.6103134Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf git@github.com: 2025-12-04T10:14:41.6145846Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf org-21003710@github.com: 2025-12-04T10:14:41.6187454Z ##[endgroup] 2025-12-04T10:14:41.6187954Z ##[group]Fetching submodules 2025-12-04T10:14:41.6193256Z [command]/usr/bin/git submodule sync --recursive 2025-12-04T10:14:41.6498336Z Synchronizing submodule url for 'android/libs/fbjni' 2025-12-04T10:14:41.6508813Z Synchronizing submodule url for 'third_party/FP16' 2025-12-04T10:14:41.6518745Z Synchronizing submodule url for 'third_party/FXdiv' 2025-12-04T10:14:41.6535958Z Synchronizing submodule url for 'third_party/NNPACK' 2025-12-04T10:14:41.6548461Z Synchronizing submodule url for 'third_party/NVTX' 2025-12-04T10:14:41.6560865Z Synchronizing submodule url for 'third_party/VulkanMemoryAllocator' 2025-12-04T10:14:41.6573230Z Synchronizing submodule url for 'third_party/XNNPACK' 2025-12-04T10:14:41.6602205Z Synchronizing submodule url for 'third_party/aiter' 2025-12-04T10:14:41.6618415Z Synchronizing submodule url for 'third_party/aiter/3rdparty/composable_kernel' 2025-12-04T10:14:41.6653910Z Synchronizing submodule url for 'third_party/benchmark' 2025-12-04T10:14:41.6665576Z Synchronizing submodule url for 
'third_party/composable_kernel' 2025-12-04T10:14:41.6680328Z Synchronizing submodule url for 'third_party/cpp-httplib' 2025-12-04T10:14:41.6702389Z Synchronizing submodule url for 'third_party/cpuinfo' 2025-12-04T10:14:41.6713787Z Synchronizing submodule url for 'third_party/cudnn_frontend' 2025-12-04T10:14:41.6735579Z Synchronizing submodule url for 'third_party/cutlass' 2025-12-04T10:14:41.6750137Z Synchronizing submodule url for 'third_party/fbgemm' 2025-12-04T10:14:41.6764012Z Synchronizing submodule url for 'third_party/fbgemm/external/asmjit' 2025-12-04T10:14:41.6786025Z Synchronizing submodule url for 'third_party/fbgemm/external/composable_kernel' 2025-12-04T10:14:41.6799867Z Synchronizing submodule url for 'third_party/fbgemm/external/cpuinfo' 2025-12-04T10:14:41.6816609Z Synchronizing submodule url for 'third_party/fbgemm/external/cutlass' 2025-12-04T10:14:41.6846141Z Synchronizing submodule url for 'third_party/fbgemm/external/googletest' 2025-12-04T10:14:41.6867337Z Synchronizing submodule url for 'third_party/fbgemm/external/hipify_torch' 2025-12-04T10:14:41.6888311Z Synchronizing submodule url for 'third_party/fbgemm/external/json' 2025-12-04T10:14:41.6914332Z Synchronizing submodule url for 'third_party/flash-attention' 2025-12-04T10:14:41.6930929Z Synchronizing submodule url for 'third_party/flash-attention/csrc/composable_kernel' 2025-12-04T10:14:41.6956565Z Synchronizing submodule url for 'third_party/flash-attention/csrc/cutlass' 2025-12-04T10:14:41.6971611Z Synchronizing submodule url for 'third_party/flatbuffers' 2025-12-04T10:14:41.6984073Z Synchronizing submodule url for 'third_party/fmt' 2025-12-04T10:14:41.6996841Z Synchronizing submodule url for 'third_party/gemmlowp/gemmlowp' 2025-12-04T10:14:41.7007637Z Synchronizing submodule url for 'third_party/gloo' 2025-12-04T10:14:41.7017219Z Synchronizing submodule url for 'third_party/googletest' 2025-12-04T10:14:41.7026692Z Synchronizing submodule url for 'third_party/ideep' 2025-12-04T10:14:41.7039107Z Synchronizing submodule url for 'third_party/ideep/mkl-dnn' 2025-12-04T10:14:41.7065595Z Synchronizing submodule url for 'third_party/ittapi' 2025-12-04T10:14:41.7076487Z Synchronizing submodule url for 'third_party/kineto' 2025-12-04T10:14:41.7089320Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog' 2025-12-04T10:14:41.7110366Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-12-04T10:14:41.7122303Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-12-04T10:14:41.7133065Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-12-04T10:14:41.7142463Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-12-04T10:14:41.7168135Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-12-04T10:14:41.7181675Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-12-04T10:14:41.7202459Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-12-04T10:14:41.7212612Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-12-04T10:14:41.7221510Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-12-04T10:14:41.7230275Z 
Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp' 2025-12-04T10:14:41.7243737Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:41.7266543Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:41.7293206Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/fmt' 2025-12-04T10:14:41.7303095Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/googletest' 2025-12-04T10:14:41.7328594Z Synchronizing submodule url for 'third_party/kleidiai' 2025-12-04T10:14:41.7351530Z Synchronizing submodule url for 'third_party/mimalloc' 2025-12-04T10:14:41.7362602Z Synchronizing submodule url for 'third_party/nlohmann' 2025-12-04T10:14:41.7374047Z Synchronizing submodule url for 'third_party/onnx' 2025-12-04T10:14:41.7392011Z Synchronizing submodule url for 'third_party/onnx/third_party/pybind11' 2025-12-04T10:14:41.7405092Z Synchronizing submodule url for 'third_party/opentelemetry-cpp' 2025-12-04T10:14:41.7419091Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-12-04T10:14:41.7429292Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/googletest' 2025-12-04T10:14:41.7440153Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-12-04T10:14:41.7449208Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-12-04T10:14:41.7458751Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-12-04T10:14:41.7478912Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-12-04T10:14:41.7489049Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-12-04T10:14:41.7500590Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:41.7522595Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:41.7534133Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-12-04T10:14:41.7553294Z Synchronizing submodule url for 'third_party/pocketfft' 2025-12-04T10:14:41.7574625Z Synchronizing submodule url for 'third_party/protobuf' 2025-12-04T10:14:41.7605916Z Synchronizing submodule url for 'third_party/protobuf/third_party/benchmark' 2025-12-04T10:14:41.7629197Z Synchronizing submodule url for 'third_party/protobuf/third_party/googletest' 2025-12-04T10:14:41.7662603Z Synchronizing submodule url for 'third_party/psimd' 2025-12-04T10:14:41.7674325Z Synchronizing submodule url for 'third_party/pthreadpool' 2025-12-04T10:14:41.7696212Z Synchronizing submodule url for 'third_party/pybind11' 2025-12-04T10:14:41.7712015Z Synchronizing submodule url for 'third_party/python-peachpy' 2025-12-04T10:14:41.7738588Z Synchronizing submodule url for 'third_party/sleef' 2025-12-04T10:14:41.7766255Z Synchronizing submodule url for 'third_party/tensorpipe' 2025-12-04T10:14:41.7797315Z Synchronizing submodule url for 'third_party/tensorpipe/third_party/googletest' 2025-12-04T10:14:41.7822210Z Synchronizing submodule url for 'third_party/tensorpipe/third_party/libnop' 2025-12-04T10:14:41.7842878Z Synchronizing submodule url for 
'third_party/tensorpipe/third_party/libuv' 2025-12-04T10:14:41.7866109Z Synchronizing submodule url for 'third_party/tensorpipe/third_party/pybind11' 2025-12-04T10:14:41.7891090Z Synchronizing submodule url for 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-12-04T10:14:41.7938038Z [command]/usr/bin/git -c protocol.version=2 submodule update --init --force --recursive 2025-12-04T10:14:41.8293128Z Submodule path 'android/libs/fbjni': checked out '7e1e1fe3858c63c251c637ae41a20de425dde96f' 2025-12-04T10:14:41.8359956Z Submodule path 'third_party/FP16': checked out '4dfe081cf6bcd15db339cf2680b9281b8451eeb3' 2025-12-04T10:14:41.8422203Z Submodule path 'third_party/FXdiv': checked out 'b408327ac2a15ec3e43352421954f5b1967701d1' 2025-12-04T10:14:41.8547308Z Submodule path 'third_party/NNPACK': checked out 'c07e3a0400713d546e0dea2d5466dd22ea389c73' 2025-12-04T10:14:41.8631667Z Submodule path 'third_party/NVTX': checked out '3ebbc93ded7285963bff932c678fa367eb393ba6' 2025-12-04T10:14:41.8731140Z Submodule path 'third_party/VulkanMemoryAllocator': checked out '1d8f600fd424278486eade7ed3e877c99f0846b1' 2025-12-04T10:14:42.3651883Z Submodule path 'third_party/XNNPACK': checked out '51a0103656eff6fc9bfd39a4597923c4b542c883' 2025-12-04T10:14:42.3886332Z Submodule path 'third_party/aiter': checked out '01aae101b9e5e94d6c16a9514c9fb8df99c93150' 2025-12-04T10:14:42.4111119Z Submodule path 'third_party/aiter/3rdparty/composable_kernel': checked out 'cffe8fa2a442ac8e80dd236a1a5d24fe3d7e0cbf' 2025-12-04T10:14:42.4275086Z Submodule path 'third_party/benchmark': checked out '299e5928955cc62af9968370293b916f5130916f' 2025-12-04T10:14:42.4500566Z Submodule path 'third_party/composable_kernel': checked out '7fe50dc3da2069d6645d9deb8c017a876472a977' 2025-12-04T10:14:42.4572419Z Submodule path 'third_party/cpp-httplib': checked out '89c932f313c6437c38f2982869beacc89c2f2246' 2025-12-04T10:14:42.5241670Z Submodule path 'third_party/cpuinfo': checked out 'f858c30bcb16f8effd5ff46996f0514539e17abc' 2025-12-04T10:14:42.5390680Z Submodule path 'third_party/cudnn_frontend': checked out '0b1577c8c83401237d601d0d0db5210506705396' 2025-12-04T10:14:42.5575074Z Submodule path 'third_party/cutlass': checked out 'f88806b1e31dfa579842638740216dd41fc6c588' 2025-12-04T10:14:42.6326642Z Submodule path 'third_party/fbgemm': checked out 'c0b988d39a9e47c794d699f29930ed4d7c7e13a4' 2025-12-04T10:14:42.6696012Z Submodule path 'third_party/fbgemm/external/asmjit': checked out 'a3199e8857792cd10b7589ff5d58343d2c9008ea' 2025-12-04T10:14:42.8430703Z Submodule path 'third_party/fbgemm/external/composable_kernel': checked out '7fe50dc3da2069d6645d9deb8c017a876472a977' 2025-12-04T10:14:42.9115763Z Submodule path 'third_party/fbgemm/external/cpuinfo': checked out '6543fec09b2f04ac4a666882998b534afc9c1349' 2025-12-04T10:14:43.2677059Z Submodule path 'third_party/fbgemm/external/cutlass': checked out '98125ce499b0fdf7ffbe0e3052f5b8709f4840f8' 2025-12-04T10:14:43.2899220Z Submodule path 'third_party/fbgemm/external/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723' 2025-12-04T10:14:43.3019172Z Submodule path 'third_party/fbgemm/external/hipify_torch': checked out '63b6a7b541fa7f08f8475ca7d74054db36ff2691' 2025-12-04T10:14:43.3602972Z Submodule path 'third_party/fbgemm/external/json': checked out '9cca280a4d0ccf0c08f47a99aa71d1b0e52f8d03' 2025-12-04T10:14:43.3762366Z Submodule path 'third_party/flash-attention': checked out '979702c87a8713a8e0a5e9fee122b90d2ef13be5' 2025-12-04T10:14:43.4019054Z Submodule path 
2025-12-04T10:14:41.7938038Z [command]/usr/bin/git -c protocol.version=2 submodule update --init --force --recursive
2025-12-04T10:14:41.8293128Z Submodule path 'android/libs/fbjni': checked out '7e1e1fe3858c63c251c637ae41a20de425dde96f'
2025-12-04T10:14:41.8359956Z Submodule path 'third_party/FP16': checked out '4dfe081cf6bcd15db339cf2680b9281b8451eeb3'
2025-12-04T10:14:41.8422203Z Submodule path 'third_party/FXdiv': checked out 'b408327ac2a15ec3e43352421954f5b1967701d1'
2025-12-04T10:14:41.8547308Z Submodule path 'third_party/NNPACK': checked out 'c07e3a0400713d546e0dea2d5466dd22ea389c73'
2025-12-04T10:14:41.8631667Z Submodule path 'third_party/NVTX': checked out '3ebbc93ded7285963bff932c678fa367eb393ba6'
2025-12-04T10:14:41.8731140Z Submodule path 'third_party/VulkanMemoryAllocator': checked out '1d8f600fd424278486eade7ed3e877c99f0846b1'
2025-12-04T10:14:42.3651883Z Submodule path 'third_party/XNNPACK': checked out '51a0103656eff6fc9bfd39a4597923c4b542c883'
2025-12-04T10:14:42.3886332Z Submodule path 'third_party/aiter': checked out '01aae101b9e5e94d6c16a9514c9fb8df99c93150'
2025-12-04T10:14:42.4111119Z Submodule path 'third_party/aiter/3rdparty/composable_kernel': checked out 'cffe8fa2a442ac8e80dd236a1a5d24fe3d7e0cbf'
2025-12-04T10:14:42.4275086Z Submodule path 'third_party/benchmark': checked out '299e5928955cc62af9968370293b916f5130916f'
2025-12-04T10:14:42.4500566Z Submodule path 'third_party/composable_kernel': checked out '7fe50dc3da2069d6645d9deb8c017a876472a977'
2025-12-04T10:14:42.4572419Z Submodule path 'third_party/cpp-httplib': checked out '89c932f313c6437c38f2982869beacc89c2f2246'
2025-12-04T10:14:42.5241670Z Submodule path 'third_party/cpuinfo': checked out 'f858c30bcb16f8effd5ff46996f0514539e17abc'
2025-12-04T10:14:42.5390680Z Submodule path 'third_party/cudnn_frontend': checked out '0b1577c8c83401237d601d0d0db5210506705396'
2025-12-04T10:14:42.5575074Z Submodule path 'third_party/cutlass': checked out 'f88806b1e31dfa579842638740216dd41fc6c588'
2025-12-04T10:14:42.6326642Z Submodule path 'third_party/fbgemm': checked out 'c0b988d39a9e47c794d699f29930ed4d7c7e13a4'
2025-12-04T10:14:42.6696012Z Submodule path 'third_party/fbgemm/external/asmjit': checked out 'a3199e8857792cd10b7589ff5d58343d2c9008ea'
2025-12-04T10:14:42.8430703Z Submodule path 'third_party/fbgemm/external/composable_kernel': checked out '7fe50dc3da2069d6645d9deb8c017a876472a977'
2025-12-04T10:14:42.9115763Z Submodule path 'third_party/fbgemm/external/cpuinfo': checked out '6543fec09b2f04ac4a666882998b534afc9c1349'
2025-12-04T10:14:43.2677059Z Submodule path 'third_party/fbgemm/external/cutlass': checked out '98125ce499b0fdf7ffbe0e3052f5b8709f4840f8'
2025-12-04T10:14:43.2899220Z Submodule path 'third_party/fbgemm/external/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723'
2025-12-04T10:14:43.3019172Z Submodule path 'third_party/fbgemm/external/hipify_torch': checked out '63b6a7b541fa7f08f8475ca7d74054db36ff2691'
2025-12-04T10:14:43.3602972Z Submodule path 'third_party/fbgemm/external/json': checked out '9cca280a4d0ccf0c08f47a99aa71d1b0e52f8d03'
2025-12-04T10:14:43.3762366Z Submodule path 'third_party/flash-attention': checked out '979702c87a8713a8e0a5e9fee122b90d2ef13be5'
2025-12-04T10:14:43.4019054Z Submodule path 'third_party/flash-attention/csrc/composable_kernel': checked out '888317e698e9803c62bd38568abc9e05d7709f33'
2025-12-04T10:14:43.4155002Z Submodule path 'third_party/flash-attention/csrc/cutlass': checked out 'c506e16788cb08416a4a57e11a9067beeee29420'
2025-12-04T10:14:43.4300974Z Submodule path 'third_party/flatbuffers': checked out 'a2cd1ea3b6d3fee220106b5fed3f7ce8da9eb757'
2025-12-04T10:14:43.4486324Z Submodule path 'third_party/fmt': checked out '407c905e45ad75fc29bf0f9bb7c5c2fd3475976f'
2025-12-04T10:14:43.4714094Z Submodule path 'third_party/gemmlowp/gemmlowp': checked out '3fb5c176c17c765a3492cd2f0321b0dab712f350'
2025-12-04T10:14:43.4869802Z Submodule path 'third_party/gloo': checked out '54cbae0d3a67fa890b4c3d9ee162b7860315e341'
2025-12-04T10:14:43.5106120Z Submodule path 'third_party/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723'
2025-12-04T10:14:43.5230516Z Submodule path 'third_party/ideep': checked out '719d8e6cd7f7a0e01b155657526d693acf97c2b3'
2025-12-04T10:14:43.8667970Z Submodule path 'third_party/ideep/mkl-dnn': checked out '8d263e693366ef8db40acc569cc7d8edf644556d'
2025-12-04T10:14:43.8813776Z Submodule path 'third_party/ittapi': checked out 'dec1d23ca65ab069d225dfe40dea14f455170959'
2025-12-04T10:14:43.8955796Z Submodule path 'third_party/kineto': checked out '31f85df8fbd89c188f14ef10f1ec65379786b943'
2025-12-04T10:14:43.9086356Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog': checked out 'd2ffe0a4e3acace628db49974246b66fc3e85fb1'
2025-12-04T10:14:43.9211631Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM': checked out 'ffde4e54bc7249a6039a5e6b45b395141e1217f9'
2025-12-04T10:14:43.9298387Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr': checked out '871ed52d350214a034f6ef8a3b8f51c5ce1bd400'
2025-12-04T10:14:43.9403158Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt': checked out 'cd4af11efc9c622896a3e4cb599fa28668ca3d05'
2025-12-04T10:14:43.9484860Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags': checked out 'e171aa2d15ed9eb17054558e0b3a6a413bb01067'
2025-12-04T10:14:43.9616927Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc': checked out '8411df715cf522606e3b1aca386ddfc0b63d34b4'
2025-12-04T10:14:43.9693514Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog': checked out 'b33e3bad4c46c8a6345525fd822af355e5ef9446'
2025-12-04T10:14:43.9789805Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723'
2025-12-04T10:14:43.9951614Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json': checked out '4f8fba14066156b73f1189a2b8bd568bde5284c5'
2025-12-04T10:14:44.0050670Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs': checked out 'f68a2fa8ea36c783bdd760371411fcb495aa3150'
2025-12-04T10:14:44.0159177Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp': checked out 'b1234816facfdda29845c46696a02998a4af115a'
2025-12-04T10:14:44.0292494Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb': checked out 'd7ba35bbb649209c66e582d5a0244ba988a15159'
2025-12-04T10:14:44.0394725Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929'
2025-12-04T10:14:44.0481443Z Submodule path 'third_party/kineto/libkineto/third_party/fmt': checked out '40626af88bd7df9a5fb80be7b25ac85b122d6c21'
2025-12-04T10:14:44.0551356Z Submodule path 'third_party/kineto/libkineto/third_party/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723'
2025-12-04T10:14:44.0663735Z Submodule path 'third_party/kleidiai': checked out 'd7770c89632329a9914ef1a90289917597639cbe'
2025-12-04T10:14:44.0763681Z Submodule path 'third_party/mimalloc': checked out 'fbd8b99c2b828428947d70fdc046bb55609be93e'
2025-12-04T10:14:44.0911285Z Submodule path 'third_party/nlohmann': checked out '55f93686c01528224f448c19128836e7df245f72'
2025-12-04T10:14:44.2762096Z Submodule path 'third_party/onnx': checked out 'e709452ef2bbc1d113faf678c24e6d3467696e83'
2025-12-04T10:14:44.3021327Z Submodule path 'third_party/onnx/third_party/pybind11': checked out 'a2e59f0e7065404b44dfe92a28aca47ba1378dc4'
2025-12-04T10:14:44.3147142Z Submodule path 'third_party/opentelemetry-cpp': checked out 'a799f4aed9c94b765dcdaabaeab7d5e7e2310878'
2025-12-04T10:14:44.3226198Z Submodule path 'third_party/opentelemetry-cpp/third_party/benchmark': checked out 'd572f4777349d43653b21d6c2fc63020ab326db2'
2025-12-04T10:14:44.3324041Z Submodule path 'third_party/opentelemetry-cpp/third_party/googletest': checked out 'b796f7d44681514f58a683a3a71ff17c94edb0c1'
2025-12-04T10:14:44.3422407Z Submodule path 'third_party/opentelemetry-cpp/third_party/ms-gsl': checked out '6f4529395c5b7c2d661812257cd6780c67e54afa'
2025-12-04T10:14:44.3527375Z Submodule path 'third_party/opentelemetry-cpp/third_party/nlohmann-json': checked out 'bc889afb4c5bf1c0d8ee29ef35eaaf4c8bef8a5d'
2025-12-04T10:14:44.3614121Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto': checked out '4ca4f0335c63cda7ab31ea7ed70d6553aee14dce'
2025-12-04T10:14:44.3661911Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentracing-cpp': checked out '06b57f48ded1fa3bdd3d4346f6ef29e40e08eaf5'
2025-12-04T10:14:44.3737540Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp': checked out 'c9ffcdda9086ffd9e1283ea7a0276d831f3c8a8d'
2025-12-04T10:14:44.3862170Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb': checked out 'eefb26f82b233268fc98577d265352720d477ba4'
2025-12-04T10:14:44.3931815Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929'
2025-12-04T10:14:44.4131879Z Submodule path 'third_party/opentelemetry-cpp/tools/vcpkg': checked out '8eb57355a4ffb410a2e94c07b4dca2dffbee8e50'
2025-12-04T10:14:44.4199616Z Submodule path 'third_party/pocketfft': checked out '0fa0ef591e38c2758e3184c6c23e497b9f732ffa'
2025-12-04T10:14:44.5539569Z Submodule path 'third_party/protobuf': checked out 'd1eca4e4b421cd2997495c4b4e65cea6be4e9b8a'
2025-12-04T10:14:44.5683737Z Submodule path 'third_party/protobuf/third_party/benchmark': checked out '5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8'
2025-12-04T10:14:44.5924706Z Submodule path 'third_party/protobuf/third_party/googletest': checked out '5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081'
2025-12-04T10:14:44.5989903Z Submodule path 'third_party/psimd': checked out '072586a71b55b7f8c584153d223e95687148a900'
2025-12-04T10:14:44.6120768Z Submodule path 'third_party/pthreadpool': checked out '4fe0e1e183925bf8cfa6aae24237e724a96479b8'
2025-12-04T10:14:44.6345278Z Submodule path 'third_party/pybind11': checked out 'f5fbe867d2d26e4a0a9177a51f6e568868ad3dc8'
2025-12-04T10:14:44.6622443Z Submodule path 'third_party/python-peachpy': checked out 'f45429b087dd7d5bc78bb40dc7cf06425c252d67'
2025-12-04T10:14:44.6926397Z Submodule path 'third_party/sleef': checked out '5a1d179df9cf652951b59010a2d2075372d67f68'
2025-12-04T10:14:44.7097062Z Submodule path 'third_party/tensorpipe': checked out '2b4cd91092d335a697416b2a3cb398283246849d'
2025-12-04T10:14:44.7321141Z Submodule path 'third_party/tensorpipe/third_party/googletest': checked out 'aee0f9d9b5b87796ee8a0ab26b7587ec30e8858e'
2025-12-04T10:14:44.7450869Z Submodule path 'third_party/tensorpipe/third_party/libnop': checked out '910b55815be16109f04f4180e9adee14fb4ce281'
2025-12-04T10:14:44.7768235Z Submodule path 'third_party/tensorpipe/third_party/libuv': checked out '5152db2cbfeb5582e9c27c5ea1dba2cd9e10759b'
2025-12-04T10:14:44.7929204Z Submodule path 'third_party/tensorpipe/third_party/pybind11': checked out 'a23996fce38ff6ccfbcdc09f1e63f2c4be5ea2ef'
2025-12-04T10:14:44.8007366Z Submodule path 'third_party/tensorpipe/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5'
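Each 'checked out' line pins a submodule to the exact commit SHA recorded in the superproject's tree rather than a branch tip, which is what makes the checkout reproducible; repeated dependencies (e.g. googletest at 52eb8108... under several parents) are checked out independently per path. A rough by-hand equivalent, under the same ~/pytorch assumption as above:

  cd ~/pytorch
  # Clone any missing submodules, discard local modifications, and
  # detach every submodule at the SHA recorded in the superproject:
  git -c protocol.version=2 submodule update --init --force --recursive
  # Spot-check one pinned revision (example path):
  git -C third_party/googletest rev-parse HEAD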
2025-12-04T10:14:44.8061649Z [command]/usr/bin/git submodule foreach --recursive git config --local gc.auto 0
2025-12-04T10:14:44.8338196Z Entering 'android/libs/fbjni'
2025-12-04T10:14:44.8395795Z Entering 'third_party/FP16'
2025-12-04T10:14:44.8430494Z Entering 'third_party/FXdiv'
2025-12-04T10:14:44.8465336Z Entering 'third_party/NNPACK'
2025-12-04T10:14:44.8488226Z Entering 'third_party/NVTX'
2025-12-04T10:14:44.8511479Z Entering 'third_party/VulkanMemoryAllocator'
2025-12-04T10:14:44.8533515Z Entering 'third_party/XNNPACK'
2025-12-04T10:14:44.8586402Z Entering 'third_party/aiter'
2025-12-04T10:14:44.8614246Z Entering 'third_party/aiter/3rdparty/composable_kernel'
2025-12-04T10:14:44.8672957Z Entering 'third_party/benchmark'
2025-12-04T10:14:44.8717889Z Entering 'third_party/composable_kernel'
2025-12-04T10:14:44.8754009Z Entering 'third_party/cpp-httplib'
2025-12-04T10:14:44.8779266Z Entering 'third_party/cpuinfo'
2025-12-04T10:14:44.8809992Z Entering 'third_party/cudnn_frontend'
2025-12-04T10:14:44.8840205Z Entering 'third_party/cutlass'
2025-12-04T10:14:44.8880337Z Entering 'third_party/fbgemm'
2025-12-04T10:14:44.8903576Z Entering 'third_party/fbgemm/external/asmjit'
2025-12-04T10:14:44.8933221Z Entering 'third_party/fbgemm/external/composable_kernel'
2025-12-04T10:14:44.8973607Z Entering 'third_party/fbgemm/external/cpuinfo'
2025-12-04T10:14:44.9001828Z Entering 'third_party/fbgemm/external/cutlass'
2025-12-04T10:14:44.9044003Z Entering 'third_party/fbgemm/external/googletest'
2025-12-04T10:14:44.9063945Z Entering 'third_party/fbgemm/external/hipify_torch'
2025-12-04T10:14:44.9092551Z Entering 'third_party/fbgemm/external/json'
2025-12-04T10:14:44.9133774Z Entering 'third_party/flash-attention'
2025-12-04T10:14:44.9165810Z Entering 'third_party/flash-attention/csrc/composable_kernel'
2025-12-04T10:14:44.9194603Z Entering 'third_party/flash-attention/csrc/cutlass'
2025-12-04T10:14:44.9254388Z Entering 'third_party/flatbuffers'
2025-12-04T10:14:44.9276873Z Entering 'third_party/fmt'
2025-12-04T10:14:44.9298054Z Entering 'third_party/gemmlowp/gemmlowp'
2025-12-04T10:14:44.9318996Z Entering 'third_party/gloo'
2025-12-04T10:14:44.9350258Z Entering 'third_party/googletest'
2025-12-04T10:14:44.9387671Z Entering 'third_party/ideep'
2025-12-04T10:14:44.9430863Z Entering 'third_party/ideep/mkl-dnn'
2025-12-04T10:14:44.9471334Z Entering 'third_party/ittapi'
2025-12-04T10:14:44.9515566Z Entering 'third_party/kineto'
2025-12-04T10:14:44.9550010Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2025-12-04T10:14:44.9594946Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2025-12-04T10:14:44.9633070Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2025-12-04T10:14:44.9670705Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2025-12-04T10:14:44.9710353Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2025-12-04T10:14:44.9754562Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2025-12-04T10:14:44.9799987Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2025-12-04T10:14:44.9833498Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2025-12-04T10:14:44.9883899Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2025-12-04T10:14:44.9908971Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2025-12-04T10:14:44.9945672Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp'
2025-12-04T10:14:44.9994689Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:45.0023739Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:45.0049534Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2025-12-04T10:14:45.0089202Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2025-12-04T10:14:45.0126738Z Entering 'third_party/kleidiai'
2025-12-04T10:14:45.0153284Z Entering 'third_party/mimalloc'
2025-12-04T10:14:45.0201544Z Entering 'third_party/nlohmann'
2025-12-04T10:14:45.0234027Z Entering 'third_party/onnx'
2025-12-04T10:14:45.0290832Z Entering 'third_party/onnx/third_party/pybind11'
2025-12-04T10:14:45.0324403Z Entering 'third_party/opentelemetry-cpp'
2025-12-04T10:14:45.0362915Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2025-12-04T10:14:45.0403833Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2025-12-04T10:14:45.0448058Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2025-12-04T10:14:45.0478413Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2025-12-04T10:14:45.0511407Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2025-12-04T10:14:45.0530936Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2025-12-04T10:14:45.0562345Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2025-12-04T10:14:45.0612492Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:45.0645890Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:45.0669210Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2025-12-04T10:14:45.0704245Z Entering 'third_party/pocketfft'
2025-12-04T10:14:45.0737887Z Entering 'third_party/protobuf'
2025-12-04T10:14:45.0783409Z Entering 'third_party/protobuf/third_party/benchmark'
2025-12-04T10:14:45.0807817Z Entering 'third_party/protobuf/third_party/googletest'
2025-12-04T10:14:45.0831242Z Entering 'third_party/psimd'
2025-12-04T10:14:45.0883675Z Entering 'third_party/pthreadpool'
2025-12-04T10:14:45.0922062Z Entering 'third_party/pybind11'
2025-12-04T10:14:45.0952053Z Entering 'third_party/python-peachpy'
2025-12-04T10:14:45.0992357Z Entering 'third_party/sleef'
2025-12-04T10:14:45.1032846Z Entering 'third_party/tensorpipe'
2025-12-04T10:14:45.1060240Z Entering 'third_party/tensorpipe/third_party/googletest'
2025-12-04T10:14:45.1079705Z Entering 'third_party/tensorpipe/third_party/libnop'
2025-12-04T10:14:45.1099671Z Entering 'third_party/tensorpipe/third_party/libuv'
2025-12-04T10:14:45.1132664Z Entering 'third_party/tensorpipe/third_party/pybind11'
2025-12-04T10:14:45.1174005Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2025-12-04T10:14:45.1211090Z ##[endgroup]
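Setting gc.auto 0 in every submodule keeps git from starting automatic garbage collection mid-job, so background repacking cannot delete loose objects or hold locks while later steps rewrite config and refs. The same knob applied by hand:

  # Disable auto-gc in a repository and in all of its submodules:
  git config --local gc.auto 0
  git submodule foreach --recursive git config --local gc.auto 0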
2025-12-04T10:14:45.1211699Z ##[group]Persisting credentials for submodules
2025-12-04T10:14:45.1221050Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'url\.https\:\/\/github\.com\/\.insteadOf' && git config --local --unset-all 'url.https://github.com/.insteadOf' || :"
2025-12-04T10:14:45.1428589Z Entering 'android/libs/fbjni'
2025-12-04T10:14:45.1467617Z Entering 'third_party/FP16'
2025-12-04T10:14:45.1515411Z Entering 'third_party/FXdiv'
2025-12-04T10:14:45.1551672Z Entering 'third_party/NNPACK'
2025-12-04T10:14:45.1573827Z Entering 'third_party/NVTX'
2025-12-04T10:14:45.1630394Z Entering 'third_party/VulkanMemoryAllocator'
2025-12-04T10:14:45.1670935Z Entering 'third_party/XNNPACK'
2025-12-04T10:14:45.1708553Z Entering 'third_party/aiter'
2025-12-04T10:14:45.1740178Z Entering 'third_party/aiter/3rdparty/composable_kernel'
2025-12-04T10:14:45.1772163Z Entering 'third_party/benchmark'
2025-12-04T10:14:45.1804330Z Entering 'third_party/composable_kernel'
2025-12-04T10:14:45.1830274Z Entering 'third_party/cpp-httplib'
2025-12-04T10:14:45.1851486Z Entering 'third_party/cpuinfo'
2025-12-04T10:14:45.1872841Z Entering 'third_party/cudnn_frontend'
2025-12-04T10:14:45.1895633Z Entering 'third_party/cutlass'
2025-12-04T10:14:45.1923910Z Entering 'third_party/fbgemm'
2025-12-04T10:14:45.1972811Z Entering 'third_party/fbgemm/external/asmjit'
2025-12-04T10:14:45.2009106Z Entering 'third_party/fbgemm/external/composable_kernel'
2025-12-04T10:14:45.2055493Z Entering 'third_party/fbgemm/external/cpuinfo'
2025-12-04T10:14:45.2097182Z Entering 'third_party/fbgemm/external/cutlass'
2025-12-04T10:14:45.2151608Z Entering 'third_party/fbgemm/external/googletest'
2025-12-04T10:14:45.2196728Z Entering 'third_party/fbgemm/external/hipify_torch'
2025-12-04T10:14:45.2231094Z Entering 'third_party/fbgemm/external/json'
2025-12-04T10:14:45.2257257Z Entering 'third_party/flash-attention'
2025-12-04T10:14:45.2318740Z Entering 'third_party/flash-attention/csrc/composable_kernel'
2025-12-04T10:14:45.2349365Z Entering 'third_party/flash-attention/csrc/cutlass'
2025-12-04T10:14:45.2386909Z Entering 'third_party/flatbuffers'
2025-12-04T10:14:45.2428615Z Entering 'third_party/fmt'
2025-12-04T10:14:45.2479697Z Entering 'third_party/gemmlowp/gemmlowp'
2025-12-04T10:14:45.2518723Z Entering 'third_party/gloo'
2025-12-04T10:14:45.2553532Z Entering 'third_party/googletest'
2025-12-04T10:14:45.2588482Z Entering 'third_party/ideep'
2025-12-04T10:14:45.2618774Z Entering 'third_party/ideep/mkl-dnn'
2025-12-04T10:14:45.2649666Z Entering 'third_party/ittapi'
2025-12-04T10:14:45.2685567Z Entering 'third_party/kineto'
2025-12-04T10:14:45.2708890Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2025-12-04T10:14:45.2756143Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2025-12-04T10:14:45.2789104Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2025-12-04T10:14:45.2810787Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2025-12-04T10:14:45.2853368Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2025-12-04T10:14:45.2901275Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2025-12-04T10:14:45.2937111Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2025-12-04T10:14:45.2985825Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2025-12-04T10:14:45.3019370Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2025-12-04T10:14:45.3041849Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2025-12-04T10:14:45.3064247Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp'
2025-12-04T10:14:45.3093055Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:45.3126902Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:45.3170125Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2025-12-04T10:14:45.3191769Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2025-12-04T10:14:45.3233401Z Entering 'third_party/kleidiai'
2025-12-04T10:14:45.3280586Z Entering 'third_party/mimalloc'
2025-12-04T10:14:45.3337024Z Entering 'third_party/nlohmann'
2025-12-04T10:14:45.3386173Z Entering 'third_party/onnx'
2025-12-04T10:14:45.3421464Z Entering 'third_party/onnx/third_party/pybind11'
2025-12-04T10:14:45.3458061Z Entering 'third_party/opentelemetry-cpp'
2025-12-04T10:14:45.3499113Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2025-12-04T10:14:45.3522492Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2025-12-04T10:14:45.3555042Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2025-12-04T10:14:45.3595835Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2025-12-04T10:14:45.3623981Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2025-12-04T10:14:45.3657011Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2025-12-04T10:14:45.3701449Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2025-12-04T10:14:45.3729154Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:45.3759043Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:45.3782039Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2025-12-04T10:14:45.3811214Z Entering 'third_party/pocketfft'
2025-12-04T10:14:45.3843690Z Entering 'third_party/protobuf'
2025-12-04T10:14:45.3881474Z Entering 'third_party/protobuf/third_party/benchmark'
2025-12-04T10:14:45.3921878Z Entering 'third_party/protobuf/third_party/googletest'
2025-12-04T10:14:45.3970534Z Entering 'third_party/psimd'
2025-12-04T10:14:45.4014642Z Entering 'third_party/pthreadpool'
2025-12-04T10:14:45.4048991Z Entering 'third_party/pybind11'
2025-12-04T10:14:45.4078123Z Entering 'third_party/python-peachpy'
2025-12-04T10:14:45.4101866Z Entering 'third_party/sleef'
2025-12-04T10:14:45.4145636Z Entering 'third_party/tensorpipe'
2025-12-04T10:14:45.4172514Z Entering 'third_party/tensorpipe/third_party/googletest'
2025-12-04T10:14:45.4216558Z Entering 'third_party/tensorpipe/third_party/libnop'
2025-12-04T10:14:45.4244847Z Entering 'third_party/tensorpipe/third_party/libuv'
2025-12-04T10:14:45.4277015Z Entering 'third_party/tensorpipe/third_party/pybind11'
2025-12-04T10:14:45.4315516Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
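Before installing its own URL rewrites, the action sweeps every submodule for leftover url.*.insteadOf entries from a previous run and removes them. The '&& ... || :' shape matters: git config --get-regexp exits non-zero when nothing matches, and the trailing '|| :' converts that into success so the recursive foreach does not abort on already-clean submodules. The idiom in isolation:

  # Unset a config key only if it is present, never failing the loop:
  git config --local --name-only --get-regexp 'url\.https://github\.com/\.insteadOf' \
    && git config --local --unset-all 'url.https://github.com/.insteadOf' || :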
2025-12-04T10:14:45.4362087Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local 'http.https://github.com/.extraheader' 'AUTHORIZATION: basic ***' && git config --local --show-origin --name-only --get-regexp remote.origin.url"
2025-12-04T10:14:45.4645782Z Entering 'android/libs/fbjni'
2025-12-04T10:14:45.4688351Z file:/home/runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config remote.origin.url
2025-12-04T10:14:45.4711056Z Entering 'third_party/FP16'
2025-12-04T10:14:45.4750443Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config remote.origin.url
2025-12-04T10:14:45.4770680Z Entering 'third_party/FXdiv'
2025-12-04T10:14:45.4813882Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config remote.origin.url
2025-12-04T10:14:45.4825823Z Entering 'third_party/NNPACK'
2025-12-04T10:14:45.4858727Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config remote.origin.url
2025-12-04T10:14:45.4868322Z Entering 'third_party/NVTX'
2025-12-04T10:14:45.4897624Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NVTX/config remote.origin.url
2025-12-04T10:14:45.4906809Z Entering 'third_party/VulkanMemoryAllocator'
2025-12-04T10:14:45.4933338Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config remote.origin.url
2025-12-04T10:14:45.4955572Z Entering 'third_party/XNNPACK'
2025-12-04T10:14:45.4985086Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config remote.origin.url
2025-12-04T10:14:45.5005698Z Entering 'third_party/aiter'
2025-12-04T10:14:45.5027161Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/config remote.origin.url
2025-12-04T10:14:45.5049083Z Entering 'third_party/aiter/3rdparty/composable_kernel'
2025-12-04T10:14:45.5097428Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/modules/3rdparty/composable_kernel/config remote.origin.url
2025-12-04T10:14:45.5114387Z Entering 'third_party/benchmark'
2025-12-04T10:14:45.5148325Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config remote.origin.url
2025-12-04T10:14:45.5169059Z Entering 'third_party/composable_kernel'
2025-12-04T10:14:45.5204308Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/composable_kernel/config remote.origin.url
2025-12-04T10:14:45.5234333Z Entering 'third_party/cpp-httplib'
2025-12-04T10:14:45.5265242Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpp-httplib/config remote.origin.url
2025-12-04T10:14:45.5286079Z Entering 'third_party/cpuinfo'
2025-12-04T10:14:45.5309368Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config remote.origin.url
2025-12-04T10:14:45.5320391Z Entering 'third_party/cudnn_frontend'
2025-12-04T10:14:45.5352332Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config remote.origin.url
2025-12-04T10:14:45.5363321Z Entering 'third_party/cutlass'
2025-12-04T10:14:45.5385462Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config remote.origin.url
2025-12-04T10:14:45.5400972Z Entering 'third_party/fbgemm'
2025-12-04T10:14:45.5439033Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config remote.origin.url
2025-12-04T10:14:45.5449139Z Entering 'third_party/fbgemm/external/asmjit'
2025-12-04T10:14:45.5479929Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/asmjit/config remote.origin.url
2025-12-04T10:14:45.5489897Z Entering 'third_party/fbgemm/external/composable_kernel'
2025-12-04T10:14:45.5522943Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/composable_kernel/config remote.origin.url
2025-12-04T10:14:45.5537161Z Entering 'third_party/fbgemm/external/cpuinfo'
2025-12-04T10:14:45.5583718Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cpuinfo/config remote.origin.url
2025-12-04T10:14:45.5594618Z Entering 'third_party/fbgemm/external/cutlass'
2025-12-04T10:14:45.5635455Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cutlass/config remote.origin.url
2025-12-04T10:14:45.5650784Z Entering 'third_party/fbgemm/external/googletest'
2025-12-04T10:14:45.5702601Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/googletest/config remote.origin.url
2025-12-04T10:14:45.5713345Z Entering 'third_party/fbgemm/external/hipify_torch'
2025-12-04T10:14:45.5754832Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/hipify_torch/config remote.origin.url
2025-12-04T10:14:45.5766794Z Entering 'third_party/fbgemm/external/json'
2025-12-04T10:14:45.5804086Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/json/config remote.origin.url
2025-12-04T10:14:45.5824429Z Entering 'third_party/flash-attention'
2025-12-04T10:14:45.5856733Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/config remote.origin.url
2025-12-04T10:14:45.5868696Z Entering 'third_party/flash-attention/csrc/composable_kernel'
2025-12-04T10:14:45.5902009Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/composable_kernel/config remote.origin.url
2025-12-04T10:14:45.5927128Z Entering 'third_party/flash-attention/csrc/cutlass'
2025-12-04T10:14:45.5973245Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/cutlass/config remote.origin.url
2025-12-04T10:14:45.6004833Z Entering 'third_party/flatbuffers'
2025-12-04T10:14:45.6048689Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config remote.origin.url
2025-12-04T10:14:45.6072864Z Entering 'third_party/fmt'
2025-12-04T10:14:45.6102632Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config remote.origin.url
2025-12-04T10:14:45.6124882Z Entering 'third_party/gemmlowp/gemmlowp'
2025-12-04T10:14:45.6157574Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config remote.origin.url
2025-12-04T10:14:45.6169522Z Entering 'third_party/gloo'
2025-12-04T10:14:45.6201158Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config remote.origin.url
2025-12-04T10:14:45.6221865Z Entering 'third_party/googletest'
2025-12-04T10:14:45.6262945Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config remote.origin.url
2025-12-04T10:14:45.6275167Z Entering 'third_party/ideep'
2025-12-04T10:14:45.6304897Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config remote.origin.url
2025-12-04T10:14:45.6315980Z Entering 'third_party/ideep/mkl-dnn'
2025-12-04T10:14:45.6354154Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config remote.origin.url
2025-12-04T10:14:45.6369166Z Entering 'third_party/ittapi'
2025-12-04T10:14:45.6389565Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config remote.origin.url
2025-12-04T10:14:45.6399409Z Entering 'third_party/kineto'
2025-12-04T10:14:45.6451213Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config remote.origin.url
2025-12-04T10:14:45.6464141Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2025-12-04T10:14:45.6505597Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config remote.origin.url
2025-12-04T10:14:45.6527978Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2025-12-04T10:14:45.6559855Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config remote.origin.url
2025-12-04T10:14:45.6571611Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2025-12-04T10:14:45.6595367Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config remote.origin.url
2025-12-04T10:14:45.6613964Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2025-12-04T10:14:45.6634815Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config remote.origin.url
2025-12-04T10:14:45.6645897Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2025-12-04T10:14:45.6666177Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config remote.origin.url
2025-12-04T10:14:45.6686839Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2025-12-04T10:14:45.6707921Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config remote.origin.url
2025-12-04T10:14:45.6719063Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2025-12-04T10:14:45.6743727Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config remote.origin.url
2025-12-04T10:14:45.6764875Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2025-12-04T10:14:45.6798766Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config remote.origin.url
2025-12-04T10:14:45.6807997Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2025-12-04T10:14:45.6838976Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config remote.origin.url
2025-12-04T10:14:45.6849127Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2025-12-04T10:14:45.6871714Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config remote.origin.url
2025-12-04T10:14:45.6881102Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp'
2025-12-04T10:14:45.6910081Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/config remote.origin.url
2025-12-04T10:14:45.6920789Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:45.6940419Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/civetweb/config remote.origin.url
2025-12-04T10:14:45.6963257Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:45.6998151Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/googletest/config remote.origin.url
2025-12-04T10:14:45.7014198Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2025-12-04T10:14:45.7053373Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config remote.origin.url
2025-12-04T10:14:45.7074174Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2025-12-04T10:14:45.7118436Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config remote.origin.url
2025-12-04T10:14:45.7131390Z Entering 'third_party/kleidiai'
2025-12-04T10:14:45.7152975Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kleidiai/config remote.origin.url
2025-12-04T10:14:45.7177598Z Entering 'third_party/mimalloc'
2025-12-04T10:14:45.7226526Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/mimalloc/config remote.origin.url
2025-12-04T10:14:45.7250496Z Entering 'third_party/nlohmann'
2025-12-04T10:14:45.7305852Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config remote.origin.url
2025-12-04T10:14:45.7332137Z Entering 'third_party/onnx'
2025-12-04T10:14:45.7372276Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config remote.origin.url
2025-12-04T10:14:45.7408493Z Entering 'third_party/onnx/third_party/pybind11'
2025-12-04T10:14:45.7449773Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url
2025-12-04T10:14:45.7466669Z Entering 'third_party/opentelemetry-cpp'
2025-12-04T10:14:45.7493833Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/config remote.origin.url
2025-12-04T10:14:45.7506533Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2025-12-04T10:14:45.7538952Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/benchmark/config remote.origin.url
2025-12-04T10:14:45.7547965Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2025-12-04T10:14:45.7586318Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/googletest/config remote.origin.url
2025-12-04T10:14:45.7597527Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2025-12-04T10:14:45.7617171Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/ms-gsl/config remote.origin.url
2025-12-04T10:14:45.7626667Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2025-12-04T10:14:45.7661667Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/nlohmann-json/config remote.origin.url
2025-12-04T10:14:45.7682625Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2025-12-04T10:14:45.7713632Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentelemetry-proto/config remote.origin.url
2025-12-04T10:14:45.7727651Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2025-12-04T10:14:45.7778865Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentracing-cpp/config remote.origin.url
2025-12-04T10:14:45.7789234Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2025-12-04T10:14:45.7816620Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/config remote.origin.url
2025-12-04T10:14:45.7837101Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:45.7859561Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/civetweb/config remote.origin.url
2025-12-04T10:14:45.7868934Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:45.7890817Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/googletest/config remote.origin.url
2025-12-04T10:14:45.7901430Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2025-12-04T10:14:45.7933534Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/tools/vcpkg/config remote.origin.url
2025-12-04T10:14:45.7968500Z Entering 'third_party/pocketfft'
2025-12-04T10:14:45.7993305Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config remote.origin.url
2025-12-04T10:14:45.8003656Z Entering 'third_party/protobuf'
2025-12-04T10:14:45.8024916Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config remote.origin.url
2025-12-04T10:14:45.8036204Z Entering 'third_party/protobuf/third_party/benchmark'
2025-12-04T10:14:45.8066193Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config remote.origin.url
2025-12-04T10:14:45.8076725Z Entering 'third_party/protobuf/third_party/googletest'
2025-12-04T10:14:45.8097881Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config remote.origin.url
2025-12-04T10:14:45.8109070Z Entering 'third_party/psimd'
2025-12-04T10:14:45.8135601Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config remote.origin.url
2025-12-04T10:14:45.8145871Z Entering 'third_party/pthreadpool'
2025-12-04T10:14:45.8187489Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config remote.origin.url
2025-12-04T10:14:45.8199089Z Entering 'third_party/pybind11'
2025-12-04T10:14:45.8222605Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config remote.origin.url
2025-12-04T10:14:45.8233937Z Entering 'third_party/python-peachpy'
2025-12-04T10:14:45.8254465Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config remote.origin.url
2025-12-04T10:14:45.8266003Z Entering 'third_party/sleef'
2025-12-04T10:14:45.8290990Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config remote.origin.url
2025-12-04T10:14:45.8301512Z Entering 'third_party/tensorpipe'
2025-12-04T10:14:45.8331399Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config remote.origin.url
2025-12-04T10:14:45.8345285Z Entering 'third_party/tensorpipe/third_party/googletest'
2025-12-04T10:14:45.8364885Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config remote.origin.url
2025-12-04T10:14:45.8384681Z Entering 'third_party/tensorpipe/third_party/libnop'
2025-12-04T10:14:45.8419457Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config remote.origin.url
2025-12-04T10:14:45.8429680Z Entering 'third_party/tensorpipe/third_party/libuv'
2025-12-04T10:14:45.8459823Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config remote.origin.url
2025-12-04T10:14:45.8480474Z Entering 'third_party/tensorpipe/third_party/pybind11'
2025-12-04T10:14:45.8500712Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config remote.origin.url
2025-12-04T10:14:45.8521005Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2025-12-04T10:14:45.8543363Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url
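The masked 'AUTHORIZATION: basic ***' value is the runner's token injected through http.<url>.extraheader, which attaches the header to every HTTPS request against github.com without ever writing the secret into a remote URL; the follow-up --show-origin query merely reports which config file defines each submodule's origin. A sketch of the same mechanism with a placeholder credential (B64CREDS is hypothetical):

  # Send an auth header on all github.com requests from this repo:
  git config --local 'http.https://github.com/.extraheader' "AUTHORIZATION: basic ${B64CREDS}"
  # Show which config file supplies each remote.origin.url:
  git config --local --show-origin --name-only --get-regexp remote.origin.url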
2025-12-04T10:14:45.8872180Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'git@github.com:'
2025-12-04T10:14:45.9164895Z Entering 'android/libs/fbjni'
2025-12-04T10:14:45.9206234Z Entering 'third_party/FP16'
2025-12-04T10:14:45.9250970Z Entering 'third_party/FXdiv'
2025-12-04T10:14:45.9277428Z Entering 'third_party/NNPACK'
2025-12-04T10:14:45.9300537Z Entering 'third_party/NVTX'
2025-12-04T10:14:45.9326808Z Entering 'third_party/VulkanMemoryAllocator'
2025-12-04T10:14:45.9364151Z Entering 'third_party/XNNPACK'
2025-12-04T10:14:45.9401163Z Entering 'third_party/aiter'
2025-12-04T10:14:45.9425194Z Entering 'third_party/aiter/3rdparty/composable_kernel'
2025-12-04T10:14:45.9469772Z Entering 'third_party/benchmark'
2025-12-04T10:14:45.9502939Z Entering 'third_party/composable_kernel'
2025-12-04T10:14:45.9537920Z Entering 'third_party/cpp-httplib'
2025-12-04T10:14:45.9570514Z Entering 'third_party/cpuinfo'
2025-12-04T10:14:45.9598156Z Entering 'third_party/cudnn_frontend'
2025-12-04T10:14:45.9629657Z Entering 'third_party/cutlass'
2025-12-04T10:14:45.9655695Z Entering 'third_party/fbgemm'
2025-12-04T10:14:45.9694441Z Entering 'third_party/fbgemm/external/asmjit'
2025-12-04T10:14:45.9737488Z Entering 'third_party/fbgemm/external/composable_kernel'
2025-12-04T10:14:45.9782917Z Entering 'third_party/fbgemm/external/cpuinfo'
2025-12-04T10:14:45.9812762Z Entering 'third_party/fbgemm/external/cutlass'
2025-12-04T10:14:45.9851455Z Entering 'third_party/fbgemm/external/googletest'
2025-12-04T10:14:45.9873216Z Entering 'third_party/fbgemm/external/hipify_torch'
2025-12-04T10:14:45.9913292Z Entering 'third_party/fbgemm/external/json'
2025-12-04T10:14:45.9939819Z Entering 'third_party/flash-attention'
2025-12-04T10:14:45.9981681Z Entering 'third_party/flash-attention/csrc/composable_kernel'
2025-12-04T10:14:46.0016474Z Entering 'third_party/flash-attention/csrc/cutlass'
2025-12-04T10:14:46.0064753Z Entering 'third_party/flatbuffers'
2025-12-04T10:14:46.0112163Z Entering 'third_party/fmt'
2025-12-04T10:14:46.0142945Z Entering 'third_party/gemmlowp/gemmlowp'
2025-12-04T10:14:46.0177867Z Entering 'third_party/gloo'
2025-12-04T10:14:46.0222472Z Entering 'third_party/googletest'
2025-12-04T10:14:46.0258044Z Entering 'third_party/ideep'
2025-12-04T10:14:46.0285103Z Entering 'third_party/ideep/mkl-dnn'
2025-12-04T10:14:46.0312791Z Entering 'third_party/ittapi'
2025-12-04T10:14:46.0354983Z Entering 'third_party/kineto'
2025-12-04T10:14:46.0389293Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2025-12-04T10:14:46.0432578Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2025-12-04T10:14:46.0468214Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2025-12-04T10:14:46.0498821Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2025-12-04T10:14:46.0537454Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2025-12-04T10:14:46.0563418Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2025-12-04T10:14:46.0585752Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2025-12-04T10:14:46.0611617Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2025-12-04T10:14:46.0645489Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2025-12-04T10:14:46.0676067Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2025-12-04T10:14:46.0694629Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp'
2025-12-04T10:14:46.0729979Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:46.0772020Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:46.0806717Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2025-12-04T10:14:46.0844299Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2025-12-04T10:14:46.0884253Z Entering 'third_party/kleidiai'
2025-12-04T10:14:46.0929276Z Entering 'third_party/mimalloc'
2025-12-04T10:14:46.0960953Z Entering 'third_party/nlohmann'
2025-12-04T10:14:46.1004873Z Entering 'third_party/onnx'
2025-12-04T10:14:46.1050132Z Entering 'third_party/onnx/third_party/pybind11'
2025-12-04T10:14:46.1091862Z Entering 'third_party/opentelemetry-cpp'
2025-12-04T10:14:46.1123448Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2025-12-04T10:14:46.1150571Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2025-12-04T10:14:46.1171585Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2025-12-04T10:14:46.1213489Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2025-12-04T10:14:46.1250883Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2025-12-04T10:14:46.1273164Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2025-12-04T10:14:46.1302364Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2025-12-04T10:14:46.1338790Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:46.1361555Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:46.1401470Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2025-12-04T10:14:46.1431956Z Entering 'third_party/pocketfft'
2025-12-04T10:14:46.1478583Z Entering 'third_party/protobuf'
2025-12-04T10:14:46.1509723Z Entering 'third_party/protobuf/third_party/benchmark'
2025-12-04T10:14:46.1552232Z Entering 'third_party/protobuf/third_party/googletest'
2025-12-04T10:14:46.1583538Z Entering 'third_party/psimd'
2025-12-04T10:14:46.1617672Z Entering 'third_party/pthreadpool'
2025-12-04T10:14:46.1661049Z Entering 'third_party/pybind11'
2025-12-04T10:14:46.1684642Z Entering 'third_party/python-peachpy'
2025-12-04T10:14:46.1708849Z Entering 'third_party/sleef'
2025-12-04T10:14:46.1741041Z Entering 'third_party/tensorpipe'
2025-12-04T10:14:46.1778849Z Entering 'third_party/tensorpipe/third_party/googletest'
2025-12-04T10:14:46.1819377Z Entering 'third_party/tensorpipe/third_party/libnop'
2025-12-04T10:14:46.1856306Z Entering 'third_party/tensorpipe/third_party/libuv'
2025-12-04T10:14:46.1886770Z Entering 'third_party/tensorpipe/third_party/pybind11'
2025-12-04T10:14:46.1913290Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
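The two insteadOf passes (this one for 'git@github.com:', the next for 'org-21003710@github.com:') rewrite SSH-style remotes onto plain HTTPS at fetch time, so the extraheader credential above applies uniformly and no SSH key is needed on the runner. Roughly:

  # Make git treat SSH remotes as their HTTPS equivalents:
  git config --local --add 'url.https://github.com/.insteadOf' 'git@github.com:'
  git config --local --add 'url.https://github.com/.insteadOf' 'org-21003710@github.com:'
  # After this, fetching git@github.com:pytorch/pytorch.git actually
  # contacts https://github.com/pytorch/pytorch.git.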
2025-12-04T10:14:46.1959410Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'org-21003710@github.com:'
2025-12-04T10:14:46.2241309Z Entering 'android/libs/fbjni'
2025-12-04T10:14:46.2267793Z Entering 'third_party/FP16'
2025-12-04T10:14:46.2289741Z Entering 'third_party/FXdiv'
2025-12-04T10:14:46.2310078Z Entering 'third_party/NNPACK'
2025-12-04T10:14:46.2338910Z Entering 'third_party/NVTX'
2025-12-04T10:14:46.2372114Z Entering 'third_party/VulkanMemoryAllocator'
2025-12-04T10:14:46.2397379Z Entering 'third_party/XNNPACK'
2025-12-04T10:14:46.2435979Z Entering 'third_party/aiter'
2025-12-04T10:14:46.2467632Z Entering 'third_party/aiter/3rdparty/composable_kernel'
2025-12-04T10:14:46.2492011Z Entering 'third_party/benchmark'
2025-12-04T10:14:46.2537083Z Entering 'third_party/composable_kernel'
2025-12-04T10:14:46.2564334Z Entering 'third_party/cpp-httplib'
2025-12-04T10:14:46.2586725Z Entering 'third_party/cpuinfo'
2025-12-04T10:14:46.2610728Z Entering 'third_party/cudnn_frontend'
2025-12-04T10:14:46.2642346Z Entering 'third_party/cutlass'
2025-12-04T10:14:46.2682126Z Entering 'third_party/fbgemm'
2025-12-04T10:14:46.2724131Z Entering 'third_party/fbgemm/external/asmjit'
2025-12-04T10:14:46.2744227Z Entering 'third_party/fbgemm/external/composable_kernel'
2025-12-04T10:14:46.2783974Z Entering 'third_party/fbgemm/external/cpuinfo'
2025-12-04T10:14:46.2823264Z Entering 'third_party/fbgemm/external/cutlass'
2025-12-04T10:14:46.2848724Z Entering 'third_party/fbgemm/external/googletest'
2025-12-04T10:14:46.2880757Z Entering 'third_party/fbgemm/external/hipify_torch'
2025-12-04T10:14:46.2920772Z Entering 'third_party/fbgemm/external/json'
2025-12-04T10:14:46.2955183Z Entering 'third_party/flash-attention'
2025-12-04T10:14:46.2979735Z Entering 'third_party/flash-attention/csrc/composable_kernel'
2025-12-04T10:14:46.3036186Z Entering 'third_party/flash-attention/csrc/cutlass'
2025-12-04T10:14:46.3067295Z Entering 'third_party/flatbuffers'
2025-12-04T10:14:46.3095391Z Entering 'third_party/fmt'
2025-12-04T10:14:46.3118223Z Entering 'third_party/gemmlowp/gemmlowp'
2025-12-04T10:14:46.3140088Z Entering 'third_party/gloo'
2025-12-04T10:14:46.3167453Z Entering 'third_party/googletest'
2025-12-04T10:14:46.3188665Z Entering 'third_party/ideep'
2025-12-04T10:14:46.3223468Z Entering 'third_party/ideep/mkl-dnn'
2025-12-04T10:14:46.3270124Z Entering 'third_party/ittapi'
2025-12-04T10:14:46.3291862Z Entering 'third_party/kineto'
2025-12-04T10:14:46.3314573Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2025-12-04T10:14:46.3339561Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2025-12-04T10:14:46.3386881Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2025-12-04T10:14:46.3407619Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2025-12-04T10:14:46.3425812Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2025-12-04T10:14:46.3443714Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2025-12-04T10:14:46.3466491Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2025-12-04T10:14:46.3486419Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2025-12-04T10:14:46.3525464Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2025-12-04T10:14:46.3546386Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2025-12-04T10:14:46.3574171Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp'
2025-12-04T10:14:46.3594273Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:46.3642905Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:46.3665679Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2025-12-04T10:14:46.3711956Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2025-12-04T10:14:46.3752829Z Entering 'third_party/kleidiai'
2025-12-04T10:14:46.3784216Z Entering 'third_party/mimalloc'
2025-12-04T10:14:46.3806695Z Entering 'third_party/nlohmann'
2025-12-04T10:14:46.3835745Z Entering 'third_party/onnx'
2025-12-04T10:14:46.3863218Z Entering 'third_party/onnx/third_party/pybind11'
2025-12-04T10:14:46.3887029Z Entering 'third_party/opentelemetry-cpp'
2025-12-04T10:14:46.3909608Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2025-12-04T10:14:46.3941222Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2025-12-04T10:14:46.3974736Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2025-12-04T10:14:46.3994600Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2025-12-04T10:14:46.4015473Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2025-12-04T10:14:46.4044361Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2025-12-04T10:14:46.4064325Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2025-12-04T10:14:46.4094833Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T10:14:46.4116883Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T10:14:46.4151229Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2025-12-04T10:14:46.4185244Z Entering 'third_party/pocketfft'
2025-12-04T10:14:46.4205802Z Entering 'third_party/protobuf'
2025-12-04T10:14:46.4233399Z Entering 'third_party/protobuf/third_party/benchmark'
2025-12-04T10:14:46.4274116Z Entering 'third_party/protobuf/third_party/googletest'
2025-12-04T10:14:46.4296804Z Entering 'third_party/psimd'
2025-12-04T10:14:46.4328026Z Entering 'third_party/pthreadpool'
2025-12-04T10:14:46.4349453Z Entering 'third_party/pybind11'
2025-12-04T10:14:46.4370091Z Entering 'third_party/python-peachpy'
2025-12-04T10:14:46.4391277Z Entering 'third_party/sleef'
2025-12-04T10:14:46.4412621Z Entering 'third_party/tensorpipe'
2025-12-04T10:14:46.4433136Z Entering 'third_party/tensorpipe/third_party/googletest'
2025-12-04T10:14:46.4462921Z Entering 'third_party/tensorpipe/third_party/libnop'
2025-12-04T10:14:46.4492196Z Entering 'third_party/tensorpipe/third_party/libuv'
2025-12-04T10:14:46.4510849Z Entering 'third_party/tensorpipe/third_party/pybind11'
2025-12-04T10:14:46.4540248Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2025-12-04T10:14:46.4578551Z ##[endgroup]
2025-12-04T10:14:46.4774644Z [command]/usr/bin/git log -1 --format=%H
2025-12-04T10:14:46.4870271Z ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32
2025-12-04T10:14:46.5122953Z ##[group]Run actions/checkout@v4
2025-12-04T10:14:46.5123324Z with:
2025-12-04T10:14:46.5123653Z   ref: ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32
2025-12-04T10:14:46.5124083Z   fetch-depth: 0
2025-12-04T10:14:46.5124373Z   submodules: recursive
2025-12-04T10:14:46.5124690Z   show-progress: false
2025-12-04T10:14:46.5125035Z   repository: pytorch/pytorch
2025-12-04T10:14:46.5125498Z   token: ***
2025-12-04T10:14:46.5125769Z   ssh-strict: true
2025-12-04T10:14:46.5126039Z   ssh-user: git
2025-12-04T10:14:46.5126338Z   persist-credentials: true
2025-12-04T10:14:46.5126664Z   clean: true
2025-12-04T10:14:46.5126976Z   sparse-checkout-cone-mode: true
2025-12-04T10:14:46.5127336Z   fetch-tags: false
2025-12-04T10:14:46.5127610Z   lfs: false
2025-12-04T10:14:46.5127888Z   set-safe-directory: true
2025-12-04T10:14:46.5128202Z env:
2025-12-04T10:14:46.5128469Z   GIT_DEFAULT_BRANCH: main
2025-12-04T10:14:46.5128784Z ##[endgroup]
2025-12-04T10:14:46.5620903Z Syncing repository: pytorch/pytorch
2025-12-04T10:14:46.5621604Z ##[group]Getting Git version info
2025-12-04T10:14:46.5622092Z Working directory is '/home/runner/_work/pytorch/pytorch'
2025-12-04T10:14:46.5634137Z [command]/usr/bin/git version
2025-12-04T10:14:46.5660111Z git version 2.52.0
2025-12-04T10:14:46.5675471Z ##[endgroup]
2025-12-04T10:14:46.5680067Z Copying '/home/runner/.gitconfig' to '/home/runner/_work/_temp/0c57a030-44bd-422b-a52e-71070bade6bb/.gitconfig'
2025-12-04T10:14:46.5685628Z Temporarily overriding HOME='/home/runner/_work/_temp/0c57a030-44bd-422b-a52e-71070bade6bb' before making global git config changes
2025-12-04T10:14:46.5686673Z Adding repository directory to the temporary git global config as a safe directory
2025-12-04T10:14:46.5688018Z [command]/usr/bin/git config --global --add safe.directory /home/runner/_work/pytorch/pytorch
2025-12-04T10:14:46.5721289Z [command]/usr/bin/git config --local --get remote.origin.url
2025-12-04T10:14:46.5749396Z https://github.com/pytorch/pytorch
2025-12-04T10:14:46.5762139Z ##[group]Removing previously created refs, to avoid conflicts
2025-12-04T10:14:46.5764955Z [command]/usr/bin/git rev-parse --symbolic-full-name --verify --quiet HEAD
2025-12-04T10:14:46.5789511Z HEAD
2025-12-04T10:14:46.5828535Z ##[endgroup]
2025-12-04T10:14:46.5829308Z [command]/usr/bin/git submodule status
2025-12-04T10:14:46.6148469Z  7e1e1fe3858c63c251c637ae41a20de425dde96f android/libs/fbjni (v0.1.0-12-g7e1e1fe)
2025-12-04T10:14:46.6232318Z  4dfe081cf6bcd15db339cf2680b9281b8451eeb3 third_party/FP16 (4dfe081)
2025-12-04T10:14:46.6337583Z  b408327ac2a15ec3e43352421954f5b1967701d1 third_party/FXdiv (b408327)
2025-12-04T10:14:46.6424502Z  c07e3a0400713d546e0dea2d5466dd22ea389c73 third_party/NNPACK (c07e3a0)
2025-12-04T10:14:46.6477530Z  3ebbc93ded7285963bff932c678fa367eb393ba6 third_party/NVTX (v3.1.0-313-g3ebbc93)
2025-12-04T10:14:46.6548765Z  1d8f600fd424278486eade7ed3e877c99f0846b1 third_party/VulkanMemoryAllocator (v2.1.0-982-g1d8f600)
2025-12-04T10:14:46.6917165Z  51a0103656eff6fc9bfd39a4597923c4b542c883 third_party/XNNPACK (remotes/origin/ds/ndk-1243-g51a0103656)
2025-12-04T10:14:46.6962978Z  01aae101b9e5e94d6c16a9514c9fb8df99c93150 third_party/aiter (v0.1.1-92-g01aae101)
2025-12-04T10:14:46.7000966Z  299e5928955cc62af9968370293b916f5130916f third_party/benchmark (v1.9.3)
2025-12-04T10:14:46.7075953Z  7fe50dc3da2069d6645d9deb8c017a876472a977 third_party/composable_kernel (rocm-6.4.3-459-g7fe50dc3d)
2025-12-04T10:14:46.7196780Z  89c932f313c6437c38f2982869beacc89c2f2246 third_party/cpp-httplib (v0.26.0)
2025-12-04T10:14:46.7323169Z  f858c30bcb16f8effd5ff46996f0514539e17abc third_party/cpuinfo (f858c30)
2025-12-04T10:14:46.7375232Z  0b1577c8c83401237d601d0d0db5210506705396 third_party/cudnn_frontend (v0.5-61-g0b1577c)
2025-12-04T10:14:46.7459862Z  f88806b1e31dfa579842638740216dd41fc6c588 third_party/cutlass (v4.3.1)
2025-12-04T10:14:46.7509363Z  c0b988d39a9e47c794d699f29930ed4d7c7e13a4 third_party/fbgemm (v1.4.0-rc1-2-gc0b988d39)
2025-12-04T10:14:46.7600138Z  979702c87a8713a8e0a5e9fee122b90d2ef13be5 third_party/flash-attention (v2.7.4)
2025-12-04T10:14:46.7616647Z  a2cd1ea3b6d3fee220106b5fed3f7ce8da9eb757 third_party/flatbuffers (v24.12.23)
2025-12-04T10:14:46.7872480Z  407c905e45ad75fc29bf0f9bb7c5c2fd3475976f third_party/fmt (12.1.0)
2025-12-04T10:14:46.7982809Z  3fb5c176c17c765a3492cd2f0321b0dab712f350 third_party/gemmlowp/gemmlowp (remotes/origin/revert-87-master-135-g3fb5c17)
2025-12-04T10:14:46.8128977Z  54cbae0d3a67fa890b4c3d9ee162b7860315e341 third_party/gloo (remotes/origin/gh/c-p-i-o/1/base-37-g54cbae0)
2025-12-04T10:14:46.8287161Z  52eb8108c5bdec04579160ae17225d66034bd723 third_party/googletest (release-1.8.0-3544-g52eb8108)
2025-12-04T10:14:46.8380803Z  719d8e6cd7f7a0e01b155657526d693acf97c2b3 third_party/ideep (pytorch-rls-v3.7.1)
2025-12-04T10:14:46.8451338Z  dec1d23ca65ab069d225dfe40dea14f455170959 third_party/ittapi (v3.25.5)
2025-12-04T10:14:46.8643017Z  31f85df8fbd89c188f14ef10f1ec65379786b943 third_party/kineto (heads/main)
2025-12-04T10:14:46.8682327Z  d7770c89632329a9914ef1a90289917597639cbe third_party/kleidiai (v1.15.0)
2025-12-04T10:14:46.8720398Z  fbd8b99c2b828428947d70fdc046bb55609be93e third_party/mimalloc (v2.2.4)
2025-12-04T10:14:46.8756786Z  55f93686c01528224f448c19128836e7df245f72 third_party/nlohmann (v3.12.0)
2025-12-04T10:14:46.8983946Z  e709452ef2bbc1d113faf678c24e6d3467696e83 third_party/onnx (v1.18.0)
2025-12-04T10:14:46.9011901Z  a799f4aed9c94b765dcdaabaeab7d5e7e2310878 third_party/opentelemetry-cpp (v1.14.2)
2025-12-04T10:14:46.9047125Z  0fa0ef591e38c2758e3184c6c23e497b9f732ffa third_party/pocketfft (release_for_eigen-40-g0fa0ef5)
2025-12-04T10:14:46.9294791Z  d1eca4e4b421cd2997495c4b4e65cea6be4e9b8a third_party/protobuf (v3.7.0-rc.2-1279-gd1eca4e4b)
2025-12-04T10:14:46.9391185Z  072586a71b55b7f8c584153d223e95687148a900 third_party/psimd (heads/master)
2025-12-04T10:14:46.9455017Z  4fe0e1e183925bf8cfa6aae24237e724a96479b8 third_party/pthreadpool (0.1-144-g4fe0e1e)
2025-12-04T10:14:46.9491242Z  f5fbe867d2d26e4a0a9177a51f6e568868ad3dc8 third_party/pybind11 (v3.0.1)
2025-12-04T10:14:46.9586626Z  f45429b087dd7d5bc78bb40dc7cf06425c252d67 third_party/python-peachpy (remotes/origin/pre-generated)
2025-12-04T10:14:46.9655402Z  5a1d179df9cf652951b59010a2d2075372d67f68 third_party/sleef (3.8)
2025-12-04T10:14:46.9738408Z  2b4cd91092d335a697416b2a3cb398283246849d third_party/tensorpipe (heads/main)
2025-12-04T10:14:46.9751717Z ##[group]Cleaning the repository
2025-12-04T10:14:46.9757683Z [command]/usr/bin/git clean -ffdx
2025-12-04T10:14:46.9889086Z [command]/usr/bin/git reset --hard HEAD
2025-12-04T10:14:47.0609601Z HEAD is now at ffd9b0fb4355 Resolve collective autotuning test failure on arm (#168919)
2025-12-04T10:14:47.0654845Z ##[endgroup]
2025-12-04T10:14:47.2493172Z Entering 'third_party/kineto' 2025-12-04T10:14:47.2520844Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-12-04T10:14:47.2558894Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-12-04T10:14:47.2606828Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-12-04T10:14:47.2651897Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-12-04T10:14:47.2701340Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-12-04T10:14:47.2740442Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-12-04T10:14:47.2798404Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-12-04T10:14:47.2834924Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-12-04T10:14:47.2892044Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-12-04T10:14:47.2932110Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-12-04T10:14:47.2972076Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp' 2025-12-04T10:14:47.3016934Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:47.3052713Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:47.3100868Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-12-04T10:14:47.3129886Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-12-04T10:14:47.3161938Z Entering 'third_party/kleidiai' 2025-12-04T10:14:47.3204417Z Entering 'third_party/mimalloc' 2025-12-04T10:14:47.3262537Z Entering 'third_party/nlohmann' 2025-12-04T10:14:47.3297610Z Entering 'third_party/onnx' 2025-12-04T10:14:47.3360376Z Entering 'third_party/onnx/third_party/pybind11' 2025-12-04T10:14:47.3402315Z Entering 'third_party/opentelemetry-cpp' 2025-12-04T10:14:47.3445392Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-12-04T10:14:47.3484713Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-12-04T10:14:47.3529088Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-12-04T10:14:47.3573587Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-12-04T10:14:47.3617057Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-12-04T10:14:47.3647504Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-12-04T10:14:47.3688810Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-12-04T10:14:47.3713860Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:47.3749189Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:47.3789871Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-12-04T10:14:47.3839255Z Entering 'third_party/pocketfft' 2025-12-04T10:14:47.3889809Z Entering 'third_party/protobuf' 2025-12-04T10:14:47.3937463Z Entering 'third_party/protobuf/third_party/benchmark' 2025-12-04T10:14:47.3972954Z Entering 'third_party/protobuf/third_party/googletest' 2025-12-04T10:14:47.4008742Z Entering 'third_party/psimd' 2025-12-04T10:14:47.4044155Z Entering 'third_party/pthreadpool' 2025-12-04T10:14:47.4085385Z Entering 'third_party/pybind11' 2025-12-04T10:14:47.4136960Z 
Entering 'third_party/python-peachpy' 2025-12-04T10:14:47.4173126Z Entering 'third_party/sleef' 2025-12-04T10:14:47.4218397Z Entering 'third_party/tensorpipe' 2025-12-04T10:14:47.4260967Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-12-04T10:14:47.4280418Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-12-04T10:14:47.4328079Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-12-04T10:14:47.4361756Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-12-04T10:14:47.4410375Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-12-04T10:14:47.4486812Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader 2025-12-04T10:14:47.4510150Z http.https://github.com/.extraheader 2025-12-04T10:14:47.4528462Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader 2025-12-04T10:14:47.4564652Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :" 2025-12-04T10:14:47.4800546Z Entering 'android/libs/fbjni' 2025-12-04T10:14:47.4829571Z http.https://github.com/.extraheader 2025-12-04T10:14:47.4857118Z Entering 'third_party/FP16' 2025-12-04T10:14:47.4874383Z http.https://github.com/.extraheader 2025-12-04T10:14:47.4890651Z Entering 'third_party/FXdiv' 2025-12-04T10:14:47.4907380Z http.https://github.com/.extraheader 2025-12-04T10:14:47.4926205Z Entering 'third_party/NNPACK' 2025-12-04T10:14:47.4940438Z http.https://github.com/.extraheader 2025-12-04T10:14:47.4959831Z Entering 'third_party/NVTX' 2025-12-04T10:14:47.4975062Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5004389Z Entering 'third_party/VulkanMemoryAllocator' 2025-12-04T10:14:47.5017729Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5058678Z Entering 'third_party/XNNPACK' 2025-12-04T10:14:47.5073400Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5110342Z Entering 'third_party/aiter' 2025-12-04T10:14:47.5123874Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5154336Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-12-04T10:14:47.5182361Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5218616Z Entering 'third_party/benchmark' 2025-12-04T10:14:47.5248446Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5289618Z Entering 'third_party/composable_kernel' 2025-12-04T10:14:47.5305106Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5329406Z Entering 'third_party/cpp-httplib' 2025-12-04T10:14:47.5358533Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5394336Z Entering 'third_party/cpuinfo' 2025-12-04T10:14:47.5411234Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5442947Z Entering 'third_party/cudnn_frontend' 2025-12-04T10:14:47.5468581Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5489043Z Entering 'third_party/cutlass' 2025-12-04T10:14:47.5502616Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5539128Z Entering 'third_party/fbgemm' 2025-12-04T10:14:47.5566040Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5596086Z Entering 'third_party/fbgemm/external/asmjit' 2025-12-04T10:14:47.5608683Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5625492Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-12-04T10:14:47.5647856Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5681870Z 
Entering 'third_party/fbgemm/external/cpuinfo' 2025-12-04T10:14:47.5709479Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5726440Z Entering 'third_party/fbgemm/external/cutlass' 2025-12-04T10:14:47.5743093Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5777338Z Entering 'third_party/fbgemm/external/googletest' 2025-12-04T10:14:47.5790281Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5828735Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-12-04T10:14:47.5859320Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5888483Z Entering 'third_party/fbgemm/external/json' 2025-12-04T10:14:47.5920468Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5952366Z Entering 'third_party/flash-attention' 2025-12-04T10:14:47.5966332Z http.https://github.com/.extraheader 2025-12-04T10:14:47.5995022Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-12-04T10:14:47.6015807Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6049662Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-12-04T10:14:47.6078130Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6115697Z Entering 'third_party/flatbuffers' 2025-12-04T10:14:47.6134634Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6164176Z Entering 'third_party/fmt' 2025-12-04T10:14:47.6184223Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6212797Z Entering 'third_party/gemmlowp/gemmlowp' 2025-12-04T10:14:47.6228146Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6246820Z Entering 'third_party/gloo' 2025-12-04T10:14:47.6266379Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6283424Z Entering 'third_party/googletest' 2025-12-04T10:14:47.6299767Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6317945Z Entering 'third_party/ideep' 2025-12-04T10:14:47.6340815Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6368458Z Entering 'third_party/ideep/mkl-dnn' 2025-12-04T10:14:47.6393503Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6413651Z Entering 'third_party/ittapi' 2025-12-04T10:14:47.6426873Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6457192Z Entering 'third_party/kineto' 2025-12-04T10:14:47.6471043Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6501238Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-12-04T10:14:47.6528603Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6546231Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-12-04T10:14:47.6567721Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6598636Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-12-04T10:14:47.6633342Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6663536Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-12-04T10:14:47.6676068Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6694287Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-12-04T10:14:47.6706616Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6723050Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-12-04T10:14:47.6734957Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6767863Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-12-04T10:14:47.6780874Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6810551Z Entering 
'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-12-04T10:14:47.6823396Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6851659Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-12-04T10:14:47.6876504Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6907778Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-12-04T10:14:47.6941486Z http.https://github.com/.extraheader 2025-12-04T10:14:47.6972077Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp' 2025-12-04T10:14:47.7001678Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7031505Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:47.7055853Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7084912Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:47.7098712Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7121134Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-12-04T10:14:47.7146618Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7178455Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-12-04T10:14:47.7194840Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7216217Z Entering 'third_party/kleidiai' 2025-12-04T10:14:47.7233643Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7265647Z Entering 'third_party/mimalloc' 2025-12-04T10:14:47.7296015Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7324391Z Entering 'third_party/nlohmann' 2025-12-04T10:14:47.7349754Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7394875Z Entering 'third_party/onnx' 2025-12-04T10:14:47.7436383Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7472726Z Entering 'third_party/onnx/third_party/pybind11' 2025-12-04T10:14:47.7502473Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7545098Z Entering 'third_party/opentelemetry-cpp' 2025-12-04T10:14:47.7564968Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7594141Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-12-04T10:14:47.7624690Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7658122Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-12-04T10:14:47.7685162Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7717367Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-12-04T10:14:47.7737712Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7763249Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-12-04T10:14:47.7793467Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7838067Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-12-04T10:14:47.7864189Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7898842Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-12-04T10:14:47.7920209Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7940074Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-12-04T10:14:47.7957483Z http.https://github.com/.extraheader 2025-12-04T10:14:47.7983733Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:47.8007141Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8043231Z Entering 
'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:47.8071085Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8103219Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-12-04T10:14:47.8117772Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8145640Z Entering 'third_party/pocketfft' 2025-12-04T10:14:47.8163021Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8185054Z Entering 'third_party/protobuf' 2025-12-04T10:14:47.8208863Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8230174Z Entering 'third_party/protobuf/third_party/benchmark' 2025-12-04T10:14:47.8249357Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8278335Z Entering 'third_party/protobuf/third_party/googletest' 2025-12-04T10:14:47.8308128Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8351506Z Entering 'third_party/psimd' 2025-12-04T10:14:47.8367266Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8386252Z Entering 'third_party/pthreadpool' 2025-12-04T10:14:47.8407569Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8436219Z Entering 'third_party/pybind11' 2025-12-04T10:14:47.8454259Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8473017Z Entering 'third_party/python-peachpy' 2025-12-04T10:14:47.8491548Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8510983Z Entering 'third_party/sleef' 2025-12-04T10:14:47.8524927Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8553996Z Entering 'third_party/tensorpipe' 2025-12-04T10:14:47.8567540Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8586603Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-12-04T10:14:47.8615457Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8644492Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-12-04T10:14:47.8657345Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8694342Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-12-04T10:14:47.8720156Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8748150Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-12-04T10:14:47.8776552Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8804765Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-12-04T10:14:47.8817892Z http.https://github.com/.extraheader 2025-12-04T10:14:47.8868282Z [command]/usr/bin/git config --local --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:47.8886168Z [command]/usr/bin/git submodule foreach --recursive git config --local --show-origin --name-only --get-regexp remote.origin.url 2025-12-04T10:14:47.9085131Z Entering 'android/libs/fbjni' 2025-12-04T10:14:47.9096741Z file:/home/runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config remote.origin.url 2025-12-04T10:14:47.9118784Z Entering 'third_party/FP16' 2025-12-04T10:14:47.9136252Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config remote.origin.url 2025-12-04T10:14:47.9146789Z Entering 'third_party/FXdiv' 2025-12-04T10:14:47.9172985Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config remote.origin.url 2025-12-04T10:14:47.9184628Z Entering 'third_party/NNPACK' 2025-12-04T10:14:47.9195689Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config remote.origin.url 2025-12-04T10:14:47.9211390Z Entering 'third_party/NVTX' 2025-12-04T10:14:47.9232987Z 
file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NVTX/config remote.origin.url 2025-12-04T10:14:47.9258008Z Entering 'third_party/VulkanMemoryAllocator' 2025-12-04T10:14:47.9269649Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config remote.origin.url 2025-12-04T10:14:47.9289902Z Entering 'third_party/XNNPACK' 2025-12-04T10:14:47.9315516Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config remote.origin.url 2025-12-04T10:14:47.9341054Z Entering 'third_party/aiter' 2025-12-04T10:14:47.9365578Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/config remote.origin.url 2025-12-04T10:14:47.9384823Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-12-04T10:14:47.9406040Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/modules/3rdparty/composable_kernel/config remote.origin.url 2025-12-04T10:14:47.9424157Z Entering 'third_party/benchmark' 2025-12-04T10:14:47.9436755Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config remote.origin.url 2025-12-04T10:14:47.9446050Z Entering 'third_party/composable_kernel' 2025-12-04T10:14:47.9473029Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/composable_kernel/config remote.origin.url 2025-12-04T10:14:47.9502810Z Entering 'third_party/cpp-httplib' 2025-12-04T10:14:47.9523534Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpp-httplib/config remote.origin.url 2025-12-04T10:14:47.9546677Z Entering 'third_party/cpuinfo' 2025-12-04T10:14:47.9571795Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config remote.origin.url 2025-12-04T10:14:47.9582639Z Entering 'third_party/cudnn_frontend' 2025-12-04T10:14:47.9608727Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config remote.origin.url 2025-12-04T10:14:47.9620773Z Entering 'third_party/cutlass' 2025-12-04T10:14:47.9646804Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config remote.origin.url 2025-12-04T10:14:47.9670179Z Entering 'third_party/fbgemm' 2025-12-04T10:14:47.9687469Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config remote.origin.url 2025-12-04T10:14:47.9711512Z Entering 'third_party/fbgemm/external/asmjit' 2025-12-04T10:14:47.9731060Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/asmjit/config remote.origin.url 2025-12-04T10:14:47.9750383Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-12-04T10:14:47.9771055Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/composable_kernel/config remote.origin.url 2025-12-04T10:14:47.9793755Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-12-04T10:14:47.9819681Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cpuinfo/config remote.origin.url 2025-12-04T10:14:47.9830959Z Entering 'third_party/fbgemm/external/cutlass' 2025-12-04T10:14:47.9841843Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cutlass/config remote.origin.url 2025-12-04T10:14:47.9870581Z Entering 'third_party/fbgemm/external/googletest' 2025-12-04T10:14:47.9890375Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/googletest/config remote.origin.url 2025-12-04T10:14:47.9900979Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-12-04T10:14:47.9920981Z 
file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/hipify_torch/config remote.origin.url 2025-12-04T10:14:47.9934016Z Entering 'third_party/fbgemm/external/json' 2025-12-04T10:14:47.9944327Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/json/config remote.origin.url 2025-12-04T10:14:47.9957044Z Entering 'third_party/flash-attention' 2025-12-04T10:14:47.9974784Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/config remote.origin.url 2025-12-04T10:14:47.9985995Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-12-04T10:14:47.9996544Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/composable_kernel/config remote.origin.url 2025-12-04T10:14:48.0063887Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-12-04T10:14:48.0064893Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/cutlass/config remote.origin.url 2025-12-04T10:14:48.0065742Z Entering 'third_party/flatbuffers' 2025-12-04T10:14:48.0066445Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config remote.origin.url 2025-12-04T10:14:48.0085255Z Entering 'third_party/fmt' 2025-12-04T10:14:48.0101026Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config remote.origin.url 2025-12-04T10:14:48.0122830Z Entering 'third_party/gemmlowp/gemmlowp' 2025-12-04T10:14:48.0151713Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config remote.origin.url 2025-12-04T10:14:48.0171821Z Entering 'third_party/gloo' 2025-12-04T10:14:48.0197714Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config remote.origin.url 2025-12-04T10:14:48.0208130Z Entering 'third_party/googletest' 2025-12-04T10:14:48.0223858Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config remote.origin.url 2025-12-04T10:14:48.0244700Z Entering 'third_party/ideep' 2025-12-04T10:14:48.0269461Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config remote.origin.url 2025-12-04T10:14:48.0290354Z Entering 'third_party/ideep/mkl-dnn' 2025-12-04T10:14:48.0311908Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config remote.origin.url 2025-12-04T10:14:48.0326542Z Entering 'third_party/ittapi' 2025-12-04T10:14:48.0338333Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config remote.origin.url 2025-12-04T10:14:48.0347697Z Entering 'third_party/kineto' 2025-12-04T10:14:48.0358801Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config remote.origin.url 2025-12-04T10:14:48.0379285Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-12-04T10:14:48.0391509Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config remote.origin.url 2025-12-04T10:14:48.0411748Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-12-04T10:14:48.0422558Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config remote.origin.url 2025-12-04T10:14:48.0444329Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-12-04T10:14:48.0456301Z 
file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config remote.origin.url 2025-12-04T10:14:48.0474470Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-12-04T10:14:48.0500248Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config remote.origin.url 2025-12-04T10:14:48.0522488Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-12-04T10:14:48.0537889Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config remote.origin.url 2025-12-04T10:14:48.0546892Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-12-04T10:14:48.0573112Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config remote.origin.url 2025-12-04T10:14:48.0585105Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-12-04T10:14:48.0610992Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config remote.origin.url 2025-12-04T10:14:48.0621604Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-12-04T10:14:48.0636484Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config remote.origin.url 2025-12-04T10:14:48.0658189Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-12-04T10:14:48.0678030Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config remote.origin.url 2025-12-04T10:14:48.0693851Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-12-04T10:14:48.0704092Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config remote.origin.url 2025-12-04T10:14:48.0724156Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp' 2025-12-04T10:14:48.0734194Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/config remote.origin.url 2025-12-04T10:14:48.0742501Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:48.0770098Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/civetweb/config remote.origin.url 2025-12-04T10:14:48.0780556Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:48.0791942Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/googletest/config remote.origin.url 2025-12-04T10:14:48.0805693Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-12-04T10:14:48.0831550Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config remote.origin.url 2025-12-04T10:14:48.0852447Z 
Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-12-04T10:14:48.0873528Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config remote.origin.url 2025-12-04T10:14:48.0894930Z Entering 'third_party/kleidiai' 2025-12-04T10:14:48.0916228Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kleidiai/config remote.origin.url 2025-12-04T10:14:48.0941006Z Entering 'third_party/mimalloc' 2025-12-04T10:14:48.0958400Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/mimalloc/config remote.origin.url 2025-12-04T10:14:48.0972556Z Entering 'third_party/nlohmann' 2025-12-04T10:14:48.0993260Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config remote.origin.url 2025-12-04T10:14:48.1019040Z Entering 'third_party/onnx' 2025-12-04T10:14:48.1038110Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config remote.origin.url 2025-12-04T10:14:48.1056879Z Entering 'third_party/onnx/third_party/pybind11' 2025-12-04T10:14:48.1071486Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url 2025-12-04T10:14:48.1083121Z Entering 'third_party/opentelemetry-cpp' 2025-12-04T10:14:48.1097560Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/config remote.origin.url 2025-12-04T10:14:48.1106935Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-12-04T10:14:48.1122374Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/benchmark/config remote.origin.url 2025-12-04T10:14:48.1141187Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-12-04T10:14:48.1161029Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/googletest/config remote.origin.url 2025-12-04T10:14:48.1172764Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-12-04T10:14:48.1197202Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/ms-gsl/config remote.origin.url 2025-12-04T10:14:48.1207313Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-12-04T10:14:48.1223691Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/nlohmann-json/config remote.origin.url 2025-12-04T10:14:48.1243939Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-12-04T10:14:48.1260303Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentelemetry-proto/config remote.origin.url 2025-12-04T10:14:48.1269782Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-12-04T10:14:48.1284815Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentracing-cpp/config remote.origin.url 2025-12-04T10:14:48.1304843Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-12-04T10:14:48.1315086Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/config remote.origin.url 2025-12-04T10:14:48.1323909Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:48.1334076Z 
file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/civetweb/config remote.origin.url 2025-12-04T10:14:48.1342856Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:48.1352002Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/googletest/config remote.origin.url 2025-12-04T10:14:48.1374566Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-12-04T10:14:48.1393922Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/tools/vcpkg/config remote.origin.url 2025-12-04T10:14:48.1413779Z Entering 'third_party/pocketfft' 2025-12-04T10:14:48.1434855Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config remote.origin.url 2025-12-04T10:14:48.1456561Z Entering 'third_party/protobuf' 2025-12-04T10:14:48.1467742Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config remote.origin.url 2025-12-04T10:14:48.1478338Z Entering 'third_party/protobuf/third_party/benchmark' 2025-12-04T10:14:48.1502713Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config remote.origin.url 2025-12-04T10:14:48.1512034Z Entering 'third_party/protobuf/third_party/googletest' 2025-12-04T10:14:48.1527699Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config remote.origin.url 2025-12-04T10:14:48.1538724Z Entering 'third_party/psimd' 2025-12-04T10:14:48.1555488Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config remote.origin.url 2025-12-04T10:14:48.1565312Z Entering 'third_party/pthreadpool' 2025-12-04T10:14:48.1579000Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config remote.origin.url 2025-12-04T10:14:48.1595663Z Entering 'third_party/pybind11' 2025-12-04T10:14:48.1612545Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config remote.origin.url 2025-12-04T10:14:48.1633160Z Entering 'third_party/python-peachpy' 2025-12-04T10:14:48.1652391Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config remote.origin.url 2025-12-04T10:14:48.1673602Z Entering 'third_party/sleef' 2025-12-04T10:14:48.1690269Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config remote.origin.url 2025-12-04T10:14:48.1699604Z Entering 'third_party/tensorpipe' 2025-12-04T10:14:48.1717020Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config remote.origin.url 2025-12-04T10:14:48.1738393Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-12-04T10:14:48.1763921Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config remote.origin.url 2025-12-04T10:14:48.1773273Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-12-04T10:14:48.1784324Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config remote.origin.url 2025-12-04T10:14:48.1793503Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-12-04T10:14:48.1803891Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config remote.origin.url 2025-12-04T10:14:48.1812648Z Entering 'third_party/tensorpipe/third_party/pybind11' 
2025-12-04T10:14:48.1823085Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config remote.origin.url 2025-12-04T10:14:48.1831591Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-12-04T10:14:48.1843199Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url 2025-12-04T10:14:48.1880539Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.1910679Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.1950375Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.1975679Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2003372Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NVTX/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2030969Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2066414Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2102018Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2127054Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/modules/3rdparty/composable_kernel/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2163093Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2198904Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/composable_kernel/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2222512Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpp-httplib/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2258997Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2284287Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2319419Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2343316Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2378429Z [command]/usr/bin/git config --file 
/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/asmjit/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2402787Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/composable_kernel/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2439086Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cpuinfo/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2475488Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cutlass/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2499379Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/googletest/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2533698Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/hipify_torch/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2568767Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/json/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2602852Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2641327Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/composable_kernel/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2669331Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/cutlass/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2707196Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2744489Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2781622Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2816256Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2850136Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2889843Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2925196Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.2961963Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config --name-only --get-regexp ^includeIf\.gitdir: 
2025-12-04T10:14:48.2986422Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3011557Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3058535Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3096329Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3121741Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3157576Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3193043Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3228612Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3263452Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3297609Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3321312Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3345731Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3370472Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/civetweb/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3402836Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/googletest/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3438808Z [command]/usr/bin/git config --file 
/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3465222Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3498967Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kleidiai/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3524369Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/mimalloc/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3550527Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3593986Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3629926Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3655922Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3691927Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/benchmark/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3725381Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3775177Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/ms-gsl/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3813376Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/nlohmann-json/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3850034Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentelemetry-proto/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3887875Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentracing-cpp/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3915373Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3951556Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/civetweb/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.3987630Z [command]/usr/bin/git config --file 
/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/googletest/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4022702Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/tools/vcpkg/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4058470Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4093071Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4135501Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4171420Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4209638Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4246241Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4283569Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4323027Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4357881Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4383780Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4420506Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4459160Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4485003Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4512253Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4548321Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config --name-only --get-regexp ^includeIf\.gitdir: 2025-12-04T10:14:48.4591947Z [command]/usr/bin/git config --local http.https://github.com/.extraheader 
AUTHORIZATION: basic *** 2025-12-04T10:14:48.4637382Z ##[endgroup] 2025-12-04T10:14:48.4638013Z ##[group]Fetching the repository 2025-12-04T10:14:48.4648556Z [command]/usr/bin/git -c protocol.version=2 fetch --prune --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/* 2025-12-04T10:14:50.1371970Z [command]/usr/bin/git rev-parse --verify --quiet ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32^{object} 2025-12-04T10:14:50.1600146Z ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:14:50.1606854Z ##[endgroup] 2025-12-04T10:14:50.1607398Z ##[group]Determining the checkout info 2025-12-04T10:14:50.1609852Z ##[endgroup] 2025-12-04T10:14:50.1617735Z [command]/usr/bin/git sparse-checkout disable 2025-12-04T10:14:50.1735624Z [command]/usr/bin/git config --local --unset-all extensions.worktreeConfig 2025-12-04T10:14:50.1772103Z ##[group]Checking out the ref 2025-12-04T10:14:50.1777582Z [command]/usr/bin/git checkout --progress --force ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:14:50.2075728Z HEAD is now at ffd9b0fb4355 Resolve collective autotuning test failure on arm (#168919) 2025-12-04T10:14:50.2080804Z ##[endgroup] 2025-12-04T10:14:50.2080975Z ##[group]Setting up auth for fetching submodules 2025-12-04T10:14:50.2085675Z [command]/usr/bin/git config --global http.https://github.com/.extraheader AUTHORIZATION: basic *** 2025-12-04T10:14:50.2119358Z [command]/usr/bin/git config --global --unset-all url.https://github.com/.insteadOf 2025-12-04T10:14:50.2144518Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf git@github.com: 2025-12-04T10:14:50.2177932Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf org-21003710@github.com: 2025-12-04T10:14:50.2206372Z ##[endgroup] 2025-12-04T10:14:50.2206862Z ##[group]Fetching submodules 2025-12-04T10:14:50.2207961Z [command]/usr/bin/git submodule sync --recursive 2025-12-04T10:14:50.2464628Z Synchronizing submodule url for 'android/libs/fbjni' 2025-12-04T10:14:50.2477480Z Synchronizing submodule url for 'third_party/FP16' 2025-12-04T10:14:50.2505771Z Synchronizing submodule url for 'third_party/FXdiv' 2025-12-04T10:14:50.2529729Z Synchronizing submodule url for 'third_party/NNPACK' 2025-12-04T10:14:50.2543633Z Synchronizing submodule url for 'third_party/NVTX' 2025-12-04T10:14:50.2555808Z Synchronizing submodule url for 'third_party/VulkanMemoryAllocator' 2025-12-04T10:14:50.2579524Z Synchronizing submodule url for 'third_party/XNNPACK' 2025-12-04T10:14:50.2598269Z Synchronizing submodule url for 'third_party/aiter' 2025-12-04T10:14:50.2611988Z Synchronizing submodule url for 'third_party/aiter/3rdparty/composable_kernel' 2025-12-04T10:14:50.2642306Z Synchronizing submodule url for 'third_party/benchmark' 2025-12-04T10:14:50.2666471Z Synchronizing submodule url for 'third_party/composable_kernel' 2025-12-04T10:14:50.2692744Z Synchronizing submodule url for 'third_party/cpp-httplib' 2025-12-04T10:14:50.2704627Z Synchronizing submodule url for 'third_party/cpuinfo' 2025-12-04T10:14:50.2734703Z Synchronizing submodule url for 'third_party/cudnn_frontend' 2025-12-04T10:14:50.2747418Z Synchronizing submodule url for 'third_party/cutlass' 2025-12-04T10:14:50.2777568Z Synchronizing submodule url for 'third_party/fbgemm' 2025-12-04T10:14:50.2803478Z Synchronizing submodule url for 'third_party/fbgemm/external/asmjit' 2025-12-04T10:14:50.2825632Z Synchronizing submodule url for 'third_party/fbgemm/external/composable_kernel' 2025-12-04T10:14:50.2855836Z Synchronizing 
submodule url for 'third_party/fbgemm/external/cpuinfo' 2025-12-04T10:14:50.2883759Z Synchronizing submodule url for 'third_party/fbgemm/external/cutlass' 2025-12-04T10:14:50.2897946Z Synchronizing submodule url for 'third_party/fbgemm/external/googletest' 2025-12-04T10:14:50.2920365Z Synchronizing submodule url for 'third_party/fbgemm/external/hipify_torch' 2025-12-04T10:14:50.2929657Z Synchronizing submodule url for 'third_party/fbgemm/external/json' 2025-12-04T10:14:50.2955516Z Synchronizing submodule url for 'third_party/flash-attention' 2025-12-04T10:14:50.2984396Z Synchronizing submodule url for 'third_party/flash-attention/csrc/composable_kernel' 2025-12-04T10:14:50.3006196Z Synchronizing submodule url for 'third_party/flash-attention/csrc/cutlass' 2025-12-04T10:14:50.3032756Z Synchronizing submodule url for 'third_party/flatbuffers' 2025-12-04T10:14:50.3059225Z Synchronizing submodule url for 'third_party/fmt' 2025-12-04T10:14:50.3088706Z Synchronizing submodule url for 'third_party/gemmlowp/gemmlowp' 2025-12-04T10:14:50.3110792Z Synchronizing submodule url for 'third_party/gloo' 2025-12-04T10:14:50.3135028Z Synchronizing submodule url for 'third_party/googletest' 2025-12-04T10:14:50.3157996Z Synchronizing submodule url for 'third_party/ideep' 2025-12-04T10:14:50.3185685Z Synchronizing submodule url for 'third_party/ideep/mkl-dnn' 2025-12-04T10:14:50.3216609Z Synchronizing submodule url for 'third_party/ittapi' 2025-12-04T10:14:50.3228316Z Synchronizing submodule url for 'third_party/kineto' 2025-12-04T10:14:50.3258316Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog' 2025-12-04T10:14:50.3268557Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-12-04T10:14:50.3280934Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-12-04T10:14:50.3307841Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-12-04T10:14:50.3329748Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-12-04T10:14:50.3349086Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-12-04T10:14:50.3362904Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-12-04T10:14:50.3385287Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-12-04T10:14:50.3396893Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-12-04T10:14:50.3417122Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-12-04T10:14:50.3439072Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp' 2025-12-04T10:14:50.3455928Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:50.3465748Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:50.3482770Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/fmt' 2025-12-04T10:14:50.3504398Z Synchronizing submodule url for 'third_party/kineto/libkineto/third_party/googletest' 2025-12-04T10:14:50.3518836Z Synchronizing submodule url for 
'third_party/kleidiai' 2025-12-04T10:14:50.3529948Z Synchronizing submodule url for 'third_party/mimalloc' 2025-12-04T10:14:50.3541645Z Synchronizing submodule url for 'third_party/nlohmann' 2025-12-04T10:14:50.3563903Z Synchronizing submodule url for 'third_party/onnx' 2025-12-04T10:14:50.3609751Z Synchronizing submodule url for 'third_party/onnx/third_party/pybind11' 2025-12-04T10:14:50.3624003Z Synchronizing submodule url for 'third_party/opentelemetry-cpp' 2025-12-04T10:14:50.3635968Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-12-04T10:14:50.3645896Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/googletest' 2025-12-04T10:14:50.3666928Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-12-04T10:14:50.3686788Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-12-04T10:14:50.3706888Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-12-04T10:14:50.3717310Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-12-04T10:14:50.3737814Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-12-04T10:14:50.3749411Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:50.3773918Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:50.3797811Z Synchronizing submodule url for 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-12-04T10:14:50.3818244Z Synchronizing submodule url for 'third_party/pocketfft' 2025-12-04T10:14:50.3829077Z Synchronizing submodule url for 'third_party/protobuf' 2025-12-04T10:14:50.3868189Z Synchronizing submodule url for 'third_party/protobuf/third_party/benchmark' 2025-12-04T10:14:50.3886126Z Synchronizing submodule url for 'third_party/protobuf/third_party/googletest' 2025-12-04T10:14:50.3903847Z Synchronizing submodule url for 'third_party/psimd' 2025-12-04T10:14:50.3926600Z Synchronizing submodule url for 'third_party/pthreadpool' 2025-12-04T10:14:50.3938466Z Synchronizing submodule url for 'third_party/pybind11' 2025-12-04T10:14:50.3949652Z Synchronizing submodule url for 'third_party/python-peachpy' 2025-12-04T10:14:50.3968016Z Synchronizing submodule url for 'third_party/sleef' 2025-12-04T10:14:50.3979452Z Synchronizing submodule url for 'third_party/tensorpipe' 2025-12-04T10:14:50.3991815Z Synchronizing submodule url for 'third_party/tensorpipe/third_party/googletest' 2025-12-04T10:14:50.4013852Z Synchronizing submodule url for 'third_party/tensorpipe/third_party/libnop' 2025-12-04T10:14:50.4043782Z Synchronizing submodule url for 'third_party/tensorpipe/third_party/libuv' 2025-12-04T10:14:50.4054706Z Synchronizing submodule url for 'third_party/tensorpipe/third_party/pybind11' 2025-12-04T10:14:50.4080993Z Synchronizing submodule url for 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-12-04T10:14:50.4117119Z [command]/usr/bin/git -c protocol.version=2 submodule update --init --force --recursive 2025-12-04T10:14:50.4419095Z Submodule path 'android/libs/fbjni': checked out '7e1e1fe3858c63c251c637ae41a20de425dde96f' 2025-12-04T10:14:50.4487456Z Submodule path 'third_party/FP16': checked out '4dfe081cf6bcd15db339cf2680b9281b8451eeb3' 2025-12-04T10:14:50.4558175Z Submodule path 'third_party/FXdiv': checked out 
'b408327ac2a15ec3e43352421954f5b1967701d1' 2025-12-04T10:14:50.4640487Z Submodule path 'third_party/NNPACK': checked out 'c07e3a0400713d546e0dea2d5466dd22ea389c73' 2025-12-04T10:14:50.4707997Z Submodule path 'third_party/NVTX': checked out '3ebbc93ded7285963bff932c678fa367eb393ba6' 2025-12-04T10:14:50.4791449Z Submodule path 'third_party/VulkanMemoryAllocator': checked out '1d8f600fd424278486eade7ed3e877c99f0846b1' 2025-12-04T10:14:50.4950895Z Submodule path 'third_party/XNNPACK': checked out '51a0103656eff6fc9bfd39a4597923c4b542c883' 2025-12-04T10:14:50.5104355Z Submodule path 'third_party/aiter': checked out '01aae101b9e5e94d6c16a9514c9fb8df99c93150' 2025-12-04T10:14:50.5334054Z Submodule path 'third_party/aiter/3rdparty/composable_kernel': checked out 'cffe8fa2a442ac8e80dd236a1a5d24fe3d7e0cbf' 2025-12-04T10:14:50.5433163Z Submodule path 'third_party/benchmark': checked out '299e5928955cc62af9968370293b916f5130916f' 2025-12-04T10:14:50.5669501Z Submodule path 'third_party/composable_kernel': checked out '7fe50dc3da2069d6645d9deb8c017a876472a977' 2025-12-04T10:14:50.5760566Z Submodule path 'third_party/cpp-httplib': checked out '89c932f313c6437c38f2982869beacc89c2f2246' 2025-12-04T10:14:50.5834322Z Submodule path 'third_party/cpuinfo': checked out 'f858c30bcb16f8effd5ff46996f0514539e17abc' 2025-12-04T10:14:50.5941149Z Submodule path 'third_party/cudnn_frontend': checked out '0b1577c8c83401237d601d0d0db5210506705396' 2025-12-04T10:14:50.6101261Z Submodule path 'third_party/cutlass': checked out 'f88806b1e31dfa579842638740216dd41fc6c588' 2025-12-04T10:14:50.6267100Z Submodule path 'third_party/fbgemm': checked out 'c0b988d39a9e47c794d699f29930ed4d7c7e13a4' 2025-12-04T10:14:50.6349562Z Submodule path 'third_party/fbgemm/external/asmjit': checked out 'a3199e8857792cd10b7589ff5d58343d2c9008ea' 2025-12-04T10:14:50.6551698Z Submodule path 'third_party/fbgemm/external/composable_kernel': checked out '7fe50dc3da2069d6645d9deb8c017a876472a977' 2025-12-04T10:14:50.6668682Z Submodule path 'third_party/fbgemm/external/cpuinfo': checked out '6543fec09b2f04ac4a666882998b534afc9c1349' 2025-12-04T10:14:50.6790483Z Submodule path 'third_party/fbgemm/external/cutlass': checked out '98125ce499b0fdf7ffbe0e3052f5b8709f4840f8' 2025-12-04T10:14:50.6853301Z Submodule path 'third_party/fbgemm/external/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723' 2025-12-04T10:14:50.6935034Z Submodule path 'third_party/fbgemm/external/hipify_torch': checked out '63b6a7b541fa7f08f8475ca7d74054db36ff2691' 2025-12-04T10:14:50.7085143Z Submodule path 'third_party/fbgemm/external/json': checked out '9cca280a4d0ccf0c08f47a99aa71d1b0e52f8d03' 2025-12-04T10:14:50.7219050Z Submodule path 'third_party/flash-attention': checked out '979702c87a8713a8e0a5e9fee122b90d2ef13be5' 2025-12-04T10:14:50.7428424Z Submodule path 'third_party/flash-attention/csrc/composable_kernel': checked out '888317e698e9803c62bd38568abc9e05d7709f33' 2025-12-04T10:14:50.7576401Z Submodule path 'third_party/flash-attention/csrc/cutlass': checked out 'c506e16788cb08416a4a57e11a9067beeee29420' 2025-12-04T10:14:50.7700915Z Submodule path 'third_party/flatbuffers': checked out 'a2cd1ea3b6d3fee220106b5fed3f7ce8da9eb757' 2025-12-04T10:14:50.7778266Z Submodule path 'third_party/fmt': checked out '407c905e45ad75fc29bf0f9bb7c5c2fd3475976f' 2025-12-04T10:14:50.7842278Z Submodule path 'third_party/gemmlowp/gemmlowp': checked out '3fb5c176c17c765a3492cd2f0321b0dab712f350' 2025-12-04T10:14:50.7939846Z Submodule path 'third_party/gloo': checked out 
'54cbae0d3a67fa890b4c3d9ee162b7860315e341' 2025-12-04T10:14:50.8036055Z Submodule path 'third_party/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723' 2025-12-04T10:14:50.8095747Z Submodule path 'third_party/ideep': checked out '719d8e6cd7f7a0e01b155657526d693acf97c2b3' 2025-12-04T10:14:50.8317298Z Submodule path 'third_party/ideep/mkl-dnn': checked out '8d263e693366ef8db40acc569cc7d8edf644556d' 2025-12-04T10:14:50.8404128Z Submodule path 'third_party/ittapi': checked out 'dec1d23ca65ab069d225dfe40dea14f455170959' 2025-12-04T10:14:50.8531428Z Submodule path 'third_party/kineto': checked out '31f85df8fbd89c188f14ef10f1ec65379786b943' 2025-12-04T10:14:50.8613165Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog': checked out 'd2ffe0a4e3acace628db49974246b66fc3e85fb1' 2025-12-04T10:14:50.8717128Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM': checked out 'ffde4e54bc7249a6039a5e6b45b395141e1217f9' 2025-12-04T10:14:50.8793373Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr': checked out '871ed52d350214a034f6ef8a3b8f51c5ce1bd400' 2025-12-04T10:14:50.8902153Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt': checked out 'cd4af11efc9c622896a3e4cb599fa28668ca3d05' 2025-12-04T10:14:50.8993216Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags': checked out 'e171aa2d15ed9eb17054558e0b3a6a413bb01067' 2025-12-04T10:14:50.9094545Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc': checked out '8411df715cf522606e3b1aca386ddfc0b63d34b4' 2025-12-04T10:14:50.9184530Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog': checked out 'b33e3bad4c46c8a6345525fd822af355e5ef9446' 2025-12-04T10:14:50.9308328Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723' 2025-12-04T10:14:50.9428438Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json': checked out '4f8fba14066156b73f1189a2b8bd568bde5284c5' 2025-12-04T10:14:50.9514751Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs': checked out 'f68a2fa8ea36c783bdd760371411fcb495aa3150' 2025-12-04T10:14:50.9604726Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp': checked out 'b1234816facfdda29845c46696a02998a4af115a' 2025-12-04T10:14:50.9738995Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb': checked out 'd7ba35bbb649209c66e582d5a0244ba988a15159' 2025-12-04T10:14:50.9845132Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929' 2025-12-04T10:14:50.9944047Z Submodule path 'third_party/kineto/libkineto/third_party/fmt': checked out '40626af88bd7df9a5fb80be7b25ac85b122d6c21' 2025-12-04T10:14:51.0065613Z Submodule path 'third_party/kineto/libkineto/third_party/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723' 2025-12-04T10:14:51.0179878Z Submodule path 'third_party/kleidiai': checked out 'd7770c89632329a9914ef1a90289917597639cbe' 2025-12-04T10:14:51.0297585Z Submodule path 'third_party/mimalloc': checked out 'fbd8b99c2b828428947d70fdc046bb55609be93e' 2025-12-04T10:14:51.0405096Z Submodule path 'third_party/nlohmann': checked out 
'55f93686c01528224f448c19128836e7df245f72' 2025-12-04T10:14:51.0589139Z Submodule path 'third_party/onnx': checked out 'e709452ef2bbc1d113faf678c24e6d3467696e83' 2025-12-04T10:14:51.0663087Z Submodule path 'third_party/onnx/third_party/pybind11': checked out 'a2e59f0e7065404b44dfe92a28aca47ba1378dc4' 2025-12-04T10:14:51.0786203Z Submodule path 'third_party/opentelemetry-cpp': checked out 'a799f4aed9c94b765dcdaabaeab7d5e7e2310878' 2025-12-04T10:14:51.0890818Z Submodule path 'third_party/opentelemetry-cpp/third_party/benchmark': checked out 'd572f4777349d43653b21d6c2fc63020ab326db2' 2025-12-04T10:14:51.0985034Z Submodule path 'third_party/opentelemetry-cpp/third_party/googletest': checked out 'b796f7d44681514f58a683a3a71ff17c94edb0c1' 2025-12-04T10:14:51.1076328Z Submodule path 'third_party/opentelemetry-cpp/third_party/ms-gsl': checked out '6f4529395c5b7c2d661812257cd6780c67e54afa' 2025-12-04T10:14:51.1205569Z Submodule path 'third_party/opentelemetry-cpp/third_party/nlohmann-json': checked out 'bc889afb4c5bf1c0d8ee29ef35eaaf4c8bef8a5d' 2025-12-04T10:14:51.1275223Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto': checked out '4ca4f0335c63cda7ab31ea7ed70d6553aee14dce' 2025-12-04T10:14:51.1350187Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentracing-cpp': checked out '06b57f48ded1fa3bdd3d4346f6ef29e40e08eaf5' 2025-12-04T10:14:51.1457445Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp': checked out 'c9ffcdda9086ffd9e1283ea7a0276d831f3c8a8d' 2025-12-04T10:14:51.1573280Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb': checked out 'eefb26f82b233268fc98577d265352720d477ba4' 2025-12-04T10:14:51.1657002Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929' 2025-12-04T10:14:51.1861823Z Submodule path 'third_party/opentelemetry-cpp/tools/vcpkg': checked out '8eb57355a4ffb410a2e94c07b4dca2dffbee8e50' 2025-12-04T10:14:51.1975820Z Submodule path 'third_party/pocketfft': checked out '0fa0ef591e38c2758e3184c6c23e497b9f732ffa' 2025-12-04T10:14:51.2153333Z Submodule path 'third_party/protobuf': checked out 'd1eca4e4b421cd2997495c4b4e65cea6be4e9b8a' 2025-12-04T10:14:51.2263810Z Submodule path 'third_party/protobuf/third_party/benchmark': checked out '5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8' 2025-12-04T10:14:51.2356069Z Submodule path 'third_party/protobuf/third_party/googletest': checked out '5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081' 2025-12-04T10:14:51.2445055Z Submodule path 'third_party/psimd': checked out '072586a71b55b7f8c584153d223e95687148a900' 2025-12-04T10:14:51.2532065Z Submodule path 'third_party/pthreadpool': checked out '4fe0e1e183925bf8cfa6aae24237e724a96479b8' 2025-12-04T10:14:51.2626862Z Submodule path 'third_party/pybind11': checked out 'f5fbe867d2d26e4a0a9177a51f6e568868ad3dc8' 2025-12-04T10:14:51.2689160Z Submodule path 'third_party/python-peachpy': checked out 'f45429b087dd7d5bc78bb40dc7cf06425c252d67' 2025-12-04T10:14:51.2766411Z Submodule path 'third_party/sleef': checked out '5a1d179df9cf652951b59010a2d2075372d67f68' 2025-12-04T10:14:51.2861099Z Submodule path 'third_party/tensorpipe': checked out '2b4cd91092d335a697416b2a3cb398283246849d' 2025-12-04T10:14:51.2954962Z Submodule path 'third_party/tensorpipe/third_party/googletest': checked out 'aee0f9d9b5b87796ee8a0ab26b7587ec30e8858e' 2025-12-04T10:14:51.3021199Z Submodule path 'third_party/tensorpipe/third_party/libnop': 
checked out '910b55815be16109f04f4180e9adee14fb4ce281' 2025-12-04T10:14:51.3156019Z Submodule path 'third_party/tensorpipe/third_party/libuv': checked out '5152db2cbfeb5582e9c27c5ea1dba2cd9e10759b' 2025-12-04T10:14:51.3226203Z Submodule path 'third_party/tensorpipe/third_party/pybind11': checked out 'a23996fce38ff6ccfbcdc09f1e63f2c4be5ea2ef' 2025-12-04T10:14:51.3335457Z Submodule path 'third_party/tensorpipe/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' 2025-12-04T10:14:51.3374252Z [command]/usr/bin/git submodule foreach --recursive git config --local gc.auto 0 2025-12-04T10:14:51.3623787Z Entering 'android/libs/fbjni' 2025-12-04T10:14:51.3670703Z Entering 'third_party/FP16' 2025-12-04T10:14:51.3718846Z Entering 'third_party/FXdiv' 2025-12-04T10:14:51.3767235Z Entering 'third_party/NNPACK' 2025-12-04T10:14:51.3828585Z Entering 'third_party/NVTX' 2025-12-04T10:14:51.3865732Z Entering 'third_party/VulkanMemoryAllocator' 2025-12-04T10:14:51.3898072Z Entering 'third_party/XNNPACK' 2025-12-04T10:14:51.3950728Z Entering 'third_party/aiter' 2025-12-04T10:14:51.3990498Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-12-04T10:14:51.4028288Z Entering 'third_party/benchmark' 2025-12-04T10:14:51.4052687Z Entering 'third_party/composable_kernel' 2025-12-04T10:14:51.4101698Z Entering 'third_party/cpp-httplib' 2025-12-04T10:14:51.4135298Z Entering 'third_party/cpuinfo' 2025-12-04T10:14:51.4168817Z Entering 'third_party/cudnn_frontend' 2025-12-04T10:14:51.4216626Z Entering 'third_party/cutlass' 2025-12-04T10:14:51.4247918Z Entering 'third_party/fbgemm' 2025-12-04T10:14:51.4277020Z Entering 'third_party/fbgemm/external/asmjit' 2025-12-04T10:14:51.4326122Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-12-04T10:14:51.4376576Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-12-04T10:14:51.4412083Z Entering 'third_party/fbgemm/external/cutlass' 2025-12-04T10:14:51.4466474Z Entering 'third_party/fbgemm/external/googletest' 2025-12-04T10:14:51.4501105Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-12-04T10:14:51.4541488Z Entering 'third_party/fbgemm/external/json' 2025-12-04T10:14:51.4582687Z Entering 'third_party/flash-attention' 2025-12-04T10:14:51.4619428Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-12-04T10:14:51.4654627Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-12-04T10:14:51.4701399Z Entering 'third_party/flatbuffers' 2025-12-04T10:14:51.4756307Z Entering 'third_party/fmt' 2025-12-04T10:14:51.4814445Z Entering 'third_party/gemmlowp/gemmlowp' 2025-12-04T10:14:51.4850892Z Entering 'third_party/gloo' 2025-12-04T10:14:51.4883814Z Entering 'third_party/googletest' 2025-12-04T10:14:51.4917318Z Entering 'third_party/ideep' 2025-12-04T10:14:51.4950945Z Entering 'third_party/ideep/mkl-dnn' 2025-12-04T10:14:51.4988460Z Entering 'third_party/ittapi' 2025-12-04T10:14:51.5023626Z Entering 'third_party/kineto' 2025-12-04T10:14:51.5050684Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-12-04T10:14:51.5092614Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-12-04T10:14:51.5136037Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-12-04T10:14:51.5176979Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-12-04T10:14:51.5215642Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-12-04T10:14:51.5249927Z Entering 
'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-12-04T10:14:51.5289305Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-12-04T10:14:51.5321197Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-12-04T10:14:51.5353765Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-12-04T10:14:51.5390741Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-12-04T10:14:51.5423043Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp' 2025-12-04T10:14:51.5455036Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:51.5494770Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:51.5543060Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-12-04T10:14:51.5569796Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-12-04T10:14:51.5596329Z Entering 'third_party/kleidiai' 2025-12-04T10:14:51.5640064Z Entering 'third_party/mimalloc' 2025-12-04T10:14:51.5680865Z Entering 'third_party/nlohmann' 2025-12-04T10:14:51.5730908Z Entering 'third_party/onnx' 2025-12-04T10:14:51.5770126Z Entering 'third_party/onnx/third_party/pybind11' 2025-12-04T10:14:51.5792615Z Entering 'third_party/opentelemetry-cpp' 2025-12-04T10:14:51.5828339Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-12-04T10:14:51.5851546Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-12-04T10:14:51.5883316Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-12-04T10:14:51.5902444Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-12-04T10:14:51.5921996Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-12-04T10:14:51.5951995Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-12-04T10:14:51.5976224Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-12-04T10:14:51.5995067Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:51.6015481Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:51.6037556Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-12-04T10:14:51.6065079Z Entering 'third_party/pocketfft' 2025-12-04T10:14:51.6098216Z Entering 'third_party/protobuf' 2025-12-04T10:14:51.6138562Z Entering 'third_party/protobuf/third_party/benchmark' 2025-12-04T10:14:51.6165111Z Entering 'third_party/protobuf/third_party/googletest' 2025-12-04T10:14:51.6202928Z Entering 'third_party/psimd' 2025-12-04T10:14:51.6232857Z Entering 'third_party/pthreadpool' 2025-12-04T10:14:51.6268756Z Entering 'third_party/pybind11' 2025-12-04T10:14:51.6311793Z Entering 'third_party/python-peachpy' 2025-12-04T10:14:51.6362880Z Entering 'third_party/sleef' 2025-12-04T10:14:51.6399247Z Entering 'third_party/tensorpipe' 2025-12-04T10:14:51.6441225Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-12-04T10:14:51.6489398Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-12-04T10:14:51.6536530Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-12-04T10:14:51.6575844Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-12-04T10:14:51.6612794Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-12-04T10:14:51.6675960Z 
##[endgroup] 2025-12-04T10:14:51.6676523Z ##[group]Persisting credentials for submodules 2025-12-04T10:14:51.6685984Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'url\.https\:\/\/github\.com\/\.insteadOf' && git config --local --unset-all 'url.https://github.com/.insteadOf' || :" 2025-12-04T10:14:51.6938153Z Entering 'android/libs/fbjni' 2025-12-04T10:14:51.6962473Z url.https://github.com/.insteadof 2025-12-04T10:14:51.6962963Z url.https://github.com/.insteadof 2025-12-04T10:14:51.6980512Z Entering 'third_party/FP16' 2025-12-04T10:14:51.7005628Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7006113Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7025439Z Entering 'third_party/FXdiv' 2025-12-04T10:14:51.7053403Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7053872Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7084854Z Entering 'third_party/NNPACK' 2025-12-04T10:14:51.7105836Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7106301Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7133643Z Entering 'third_party/NVTX' 2025-12-04T10:14:51.7153771Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7154231Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7173881Z Entering 'third_party/VulkanMemoryAllocator' 2025-12-04T10:14:51.7196803Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7197267Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7235354Z Entering 'third_party/XNNPACK' 2025-12-04T10:14:51.7249123Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7249584Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7274448Z Entering 'third_party/aiter' 2025-12-04T10:14:51.7293988Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7294458Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7325432Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-12-04T10:14:51.7349043Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7349511Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7374442Z Entering 'third_party/benchmark' 2025-12-04T10:14:51.7389415Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7389887Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7428549Z Entering 'third_party/composable_kernel' 2025-12-04T10:14:51.7455795Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7456266Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7478426Z Entering 'third_party/cpp-httplib' 2025-12-04T10:14:51.7517050Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7517542Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7556580Z Entering 'third_party/cpuinfo' 2025-12-04T10:14:51.7589517Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7589675Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7618253Z Entering 'third_party/cudnn_frontend' 2025-12-04T10:14:51.7636159Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7636629Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7666738Z Entering 'third_party/cutlass' 2025-12-04T10:14:51.7685070Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7685888Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7711343Z Entering 'third_party/fbgemm' 2025-12-04T10:14:51.7727647Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7728095Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7746949Z Entering 'third_party/fbgemm/external/asmjit' 2025-12-04T10:14:51.7774733Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7775191Z 
url.https://github.com/.insteadof 2025-12-04T10:14:51.7805719Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-12-04T10:14:51.7823009Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7823448Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7847288Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-12-04T10:14:51.7881274Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7881733Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7921218Z Entering 'third_party/fbgemm/external/cutlass' 2025-12-04T10:14:51.7963148Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7963614Z url.https://github.com/.insteadof 2025-12-04T10:14:51.7988680Z Entering 'third_party/fbgemm/external/googletest' 2025-12-04T10:14:51.8007341Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8007740Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8036863Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-12-04T10:14:51.8056711Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8057180Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8075605Z Entering 'third_party/fbgemm/external/json' 2025-12-04T10:14:51.8096475Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8096949Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8127997Z Entering 'third_party/flash-attention' 2025-12-04T10:14:51.8148511Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8149000Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8189509Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-12-04T10:14:51.8206475Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8206936Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8234533Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-12-04T10:14:51.8259322Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8259786Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8295129Z Entering 'third_party/flatbuffers' 2025-12-04T10:14:51.8310375Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8310909Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8343307Z Entering 'third_party/fmt' 2025-12-04T10:14:51.8370704Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8388160Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8388849Z Entering 'third_party/gemmlowp/gemmlowp' 2025-12-04T10:14:51.8422078Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8422537Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8462525Z Entering 'third_party/gloo' 2025-12-04T10:14:51.8487288Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8487751Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8527775Z Entering 'third_party/googletest' 2025-12-04T10:14:51.8542367Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8542827Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8559736Z Entering 'third_party/ideep' 2025-12-04T10:14:51.8581857Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8582318Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8600434Z Entering 'third_party/ideep/mkl-dnn' 2025-12-04T10:14:51.8614712Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8615189Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8645835Z Entering 'third_party/ittapi' 2025-12-04T10:14:51.8670689Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8671164Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8690102Z Entering 'third_party/kineto' 2025-12-04T10:14:51.8703079Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8703554Z 
url.https://github.com/.insteadof 2025-12-04T10:14:51.8721031Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-12-04T10:14:51.8734932Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8735611Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8751152Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-12-04T10:14:51.8780793Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8781262Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8812893Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-12-04T10:14:51.8826619Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8827095Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8869470Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-12-04T10:14:51.8901947Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8902375Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8934752Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-12-04T10:14:51.8962968Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8963437Z url.https://github.com/.insteadof 2025-12-04T10:14:51.8983909Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-12-04T10:14:51.9017608Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9018078Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9062221Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-12-04T10:14:51.9086339Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9086807Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9105759Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-12-04T10:14:51.9124487Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9124951Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9154472Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-12-04T10:14:51.9188839Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9189305Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9223335Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-12-04T10:14:51.9242696Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9243171Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9262293Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp' 2025-12-04T10:14:51.9274945Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9275417Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9305830Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:51.9317496Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9318176Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9337829Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:51.9356982Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9357403Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9398121Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-12-04T10:14:51.9398730Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9399175Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9425977Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-12-04T10:14:51.9442558Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9443024Z url.https://github.com/.insteadof 
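Note: the pair of url.https://github.com/.insteadof keys listed for each submodule above are local copies of the SSH-to-HTTPS rewrite rules (the global rules were installed earlier in this step for 'git@github.com:' and 'org-21003710@github.com:'); this pass enumerates and unsets them in every submodule before fresh credentials are persisted. A minimal sketch of the same cleanup, assuming it is run from the superproject root; it restates the foreach command logged in this step, nothing more:

  # list any local insteadOf rewrites, then drop them all;
  # the trailing '|| :' keeps the loop going for submodules that have none configured
  git submodule foreach --recursive sh -c \
    "git config --local --name-only --get-regexp 'url\.https\:\/\/github\.com\/\.insteadOf' \
    && git config --local --unset-all 'url.https://github.com/.insteadOf' || :"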
2025-12-04T10:14:51.9473745Z Entering 'third_party/kleidiai' 2025-12-04T10:14:51.9488589Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9489188Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9519094Z Entering 'third_party/mimalloc' 2025-12-04T10:14:51.9540566Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9541119Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9570159Z Entering 'third_party/nlohmann' 2025-12-04T10:14:51.9592033Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9592511Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9611497Z Entering 'third_party/onnx' 2025-12-04T10:14:51.9638849Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9639320Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9679666Z Entering 'third_party/onnx/third_party/pybind11' 2025-12-04T10:14:51.9709900Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9710375Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9731481Z Entering 'third_party/opentelemetry-cpp' 2025-12-04T10:14:51.9750785Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9751256Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9782037Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-12-04T10:14:51.9801104Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9801603Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9825771Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-12-04T10:14:51.9846867Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9847351Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9863837Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-12-04T10:14:51.9876970Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9877466Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9905245Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-12-04T10:14:51.9929032Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9929511Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9960824Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-12-04T10:14:51.9979683Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9980168Z url.https://github.com/.insteadof 2025-12-04T10:14:51.9999131Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-12-04T10:14:52.0012323Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0013140Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0031799Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-12-04T10:14:52.0045182Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0045669Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0073703Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:52.0103279Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0103751Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0122935Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:52.0152658Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0153539Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0186813Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-12-04T10:14:52.0213769Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0214682Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0263302Z Entering 'third_party/pocketfft' 2025-12-04T10:14:52.0277869Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0278340Z 
url.https://github.com/.insteadof 2025-12-04T10:14:52.0299099Z Entering 'third_party/protobuf' 2025-12-04T10:14:52.0313248Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0313648Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0332110Z Entering 'third_party/protobuf/third_party/benchmark' 2025-12-04T10:14:52.0346064Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0346466Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0376584Z Entering 'third_party/protobuf/third_party/googletest' 2025-12-04T10:14:52.0399668Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0400136Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0442232Z Entering 'third_party/psimd' 2025-12-04T10:14:52.0475296Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0475762Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0515071Z Entering 'third_party/pthreadpool' 2025-12-04T10:14:52.0530533Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0531062Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0547826Z Entering 'third_party/pybind11' 2025-12-04T10:14:52.0565528Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0565993Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0595446Z Entering 'third_party/python-peachpy' 2025-12-04T10:14:52.0614957Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0615445Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0644249Z Entering 'third_party/sleef' 2025-12-04T10:14:52.0658869Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0659343Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0687039Z Entering 'third_party/tensorpipe' 2025-12-04T10:14:52.0709182Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0709659Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0726410Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-12-04T10:14:52.0761838Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0762308Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0781213Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-12-04T10:14:52.0815044Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0815507Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0834553Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-12-04T10:14:52.0858609Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0859092Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0875462Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-12-04T10:14:52.0888334Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0888794Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0915726Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-12-04T10:14:52.0938422Z url.https://github.com/.insteadof 2025-12-04T10:14:52.0938889Z url.https://github.com/.insteadof 2025-12-04T10:14:52.1006865Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local 'http.https://github.com/.extraheader' 'AUTHORIZATION: basic ***' && git config --local --show-origin --name-only --get-regexp remote.origin.url" 2025-12-04T10:14:52.1226459Z Entering 'android/libs/fbjni' 2025-12-04T10:14:52.1271753Z file:/home/runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config remote.origin.url 2025-12-04T10:14:52.1285677Z Entering 'third_party/FP16' 2025-12-04T10:14:52.1307770Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config remote.origin.url 2025-12-04T10:14:52.1328496Z Entering 'third_party/FXdiv' 2025-12-04T10:14:52.1360160Z 
file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config remote.origin.url 2025-12-04T10:14:52.1371061Z Entering 'third_party/NNPACK' 2025-12-04T10:14:52.1414949Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config remote.origin.url 2025-12-04T10:14:52.1428624Z Entering 'third_party/NVTX' 2025-12-04T10:14:52.1469240Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NVTX/config remote.origin.url 2025-12-04T10:14:52.1481404Z Entering 'third_party/VulkanMemoryAllocator' 2025-12-04T10:14:52.1508900Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config remote.origin.url 2025-12-04T10:14:52.1519770Z Entering 'third_party/XNNPACK' 2025-12-04T10:14:52.1550285Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config remote.origin.url 2025-12-04T10:14:52.1584359Z Entering 'third_party/aiter' 2025-12-04T10:14:52.1610701Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/config remote.origin.url 2025-12-04T10:14:52.1620555Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-12-04T10:14:52.1645603Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/modules/3rdparty/composable_kernel/config remote.origin.url 2025-12-04T10:14:52.1661078Z Entering 'third_party/benchmark' 2025-12-04T10:14:52.1685698Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config remote.origin.url 2025-12-04T10:14:52.1697972Z Entering 'third_party/composable_kernel' 2025-12-04T10:14:52.1725015Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/composable_kernel/config remote.origin.url 2025-12-04T10:14:52.1754001Z Entering 'third_party/cpp-httplib' 2025-12-04T10:14:52.1776886Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpp-httplib/config remote.origin.url 2025-12-04T10:14:52.1786732Z Entering 'third_party/cpuinfo' 2025-12-04T10:14:52.1815395Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config remote.origin.url 2025-12-04T10:14:52.1825663Z Entering 'third_party/cudnn_frontend' 2025-12-04T10:14:52.1847707Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config remote.origin.url 2025-12-04T10:14:52.1869388Z Entering 'third_party/cutlass' 2025-12-04T10:14:52.1921317Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config remote.origin.url 2025-12-04T10:14:52.1951666Z Entering 'third_party/fbgemm' 2025-12-04T10:14:52.1988688Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config remote.origin.url 2025-12-04T10:14:52.1999743Z Entering 'third_party/fbgemm/external/asmjit' 2025-12-04T10:14:52.2041621Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/asmjit/config remote.origin.url 2025-12-04T10:14:52.2053158Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-12-04T10:14:52.2106671Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/composable_kernel/config remote.origin.url 2025-12-04T10:14:52.2120865Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-12-04T10:14:52.2148819Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cpuinfo/config remote.origin.url 2025-12-04T10:14:52.2159444Z Entering 'third_party/fbgemm/external/cutlass' 2025-12-04T10:14:52.2202511Z 
file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cutlass/config remote.origin.url 2025-12-04T10:14:52.2216023Z Entering 'third_party/fbgemm/external/googletest' 2025-12-04T10:14:52.2242071Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/googletest/config remote.origin.url 2025-12-04T10:14:52.2252247Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-12-04T10:14:52.2291498Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/hipify_torch/config remote.origin.url 2025-12-04T10:14:52.2311206Z Entering 'third_party/fbgemm/external/json' 2025-12-04T10:14:52.2353802Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/json/config remote.origin.url 2025-12-04T10:14:52.2384055Z Entering 'third_party/flash-attention' 2025-12-04T10:14:52.2427136Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/config remote.origin.url 2025-12-04T10:14:52.2440291Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-12-04T10:14:52.2465624Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/composable_kernel/config remote.origin.url 2025-12-04T10:14:52.2475542Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-12-04T10:14:52.2515394Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/cutlass/config remote.origin.url 2025-12-04T10:14:52.2531419Z Entering 'third_party/flatbuffers' 2025-12-04T10:14:52.2552031Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config remote.origin.url 2025-12-04T10:14:52.2576170Z Entering 'third_party/fmt' 2025-12-04T10:14:52.2605877Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config remote.origin.url 2025-12-04T10:14:52.2616911Z Entering 'third_party/gemmlowp/gemmlowp' 2025-12-04T10:14:52.2660110Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config remote.origin.url 2025-12-04T10:14:52.2671201Z Entering 'third_party/gloo' 2025-12-04T10:14:52.2717759Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config remote.origin.url 2025-12-04T10:14:52.2742046Z Entering 'third_party/googletest' 2025-12-04T10:14:52.2784391Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config remote.origin.url 2025-12-04T10:14:52.2795800Z Entering 'third_party/ideep' 2025-12-04T10:14:52.2826466Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config remote.origin.url 2025-12-04T10:14:52.2836041Z Entering 'third_party/ideep/mkl-dnn' 2025-12-04T10:14:52.2889376Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config remote.origin.url 2025-12-04T10:14:52.2904290Z Entering 'third_party/ittapi' 2025-12-04T10:14:52.2962276Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config remote.origin.url 2025-12-04T10:14:52.2971697Z Entering 'third_party/kineto' 2025-12-04T10:14:52.3009484Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config remote.origin.url 2025-12-04T10:14:52.3031933Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-12-04T10:14:52.3054170Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config remote.origin.url 2025-12-04T10:14:52.3063673Z Entering 
'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-12-04T10:14:52.3084105Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config remote.origin.url 2025-12-04T10:14:52.3105799Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-12-04T10:14:52.3148295Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config remote.origin.url 2025-12-04T10:14:52.3159574Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-12-04T10:14:52.3190711Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config remote.origin.url 2025-12-04T10:14:52.3201747Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-12-04T10:14:52.3236030Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config remote.origin.url 2025-12-04T10:14:52.3247533Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-12-04T10:14:52.3291660Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config remote.origin.url 2025-12-04T10:14:52.3305015Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-12-04T10:14:52.3351908Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config remote.origin.url 2025-12-04T10:14:52.3361325Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-12-04T10:14:52.3415582Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config remote.origin.url 2025-12-04T10:14:52.3425209Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-12-04T10:14:52.3468989Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config remote.origin.url 2025-12-04T10:14:52.3478754Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-12-04T10:14:52.3524093Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config remote.origin.url 2025-12-04T10:14:52.3534321Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp' 2025-12-04T10:14:52.3558513Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/config remote.origin.url 2025-12-04T10:14:52.3566136Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb' 2025-12-04T10:14:52.3599630Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/civetweb/config remote.origin.url 2025-12-04T10:14:52.3610690Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest' 2025-12-04T10:14:52.3639464Z 
file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/googletest/config remote.origin.url
[... the foreach continues through the remaining submodules (kineto's fmt and googletest, kleidiai, mimalloc, nlohmann, onnx and its pybind11, opentelemetry-cpp and its vendored third_party/tools trees, pocketfft, protobuf and its benchmark/googletest, psimd, pthreadpool, pybind11, python-peachpy, sleef, tensorpipe and its googletest/libnop/libuv/pybind11/clang), printing each submodule's config path and remote.origin.url ...]
2025-12-04T10:14:52.5590165Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'git@github.com:'
[... Entering each of the same submodules recursively; traversal output identical to the listing above ...]
2025-12-04T10:14:52.8422161Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'org-21003710@github.com:'
[... Entering each of the same submodules recursively; traversal output identical to the listing above ...]
2025-12-04T10:14:53.0910490Z ##[endgroup]
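A note on the two rewrites above: each pass adds a git "insteadOf" mapping so that SSH-style remotes (git@github.com: and the org-scoped org-21003710@github.com:) are fetched over HTTPS, which works without SSH keys on CI runners. A minimal sketch of the same technique for a single clone; beyond the two commands shown in this log, the surrounding usage is illustrative, not what the checkout action literally runs:

    # Rewrite SSH remotes to HTTPS for this repository only.
    git config --local url.https://github.com/.insteadOf git@github.com:
    # Apply the same rewrite inside every submodule, recursively.
    git submodule foreach --recursive \
      git config --local --add url.https://github.com/.insteadOf git@github.com: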
2025-12-04T10:14:53.1083636Z [command]/usr/bin/git log -1 --format=%H
2025-12-04T10:14:53.1204833Z ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32
2025-12-04T10:14:53.1471891Z Prepare all required actions
2025-12-04T10:14:53.1472589Z Getting action download info
2025-12-04T10:14:53.4257295Z Download action repository 'aws-actions/amazon-ecr-login@062b18b96a7aff071d4dc91bc00c4c1a7945b076' (SHA:062b18b96a7aff071d4dc91bc00c4c1a7945b076)
2025-12-04T10:14:54.4078118Z ##[group]Run ./.github/actions/setup-rocm
2025-12-04T10:14:54.4078524Z env:
2025-12-04T10:14:54.4078801Z   GIT_DEFAULT_BRANCH: main
2025-12-04T10:14:54.4079142Z ##[endgroup]
2025-12-04T10:14:54.4114083Z ##[group]Run dpkg -l | grep -E " rocm"
2025-12-04T10:14:54.4114525Z dpkg -l | grep -E " rocm"
2025-12-04T10:14:54.4121731Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-12-04T10:14:54.4122018Z env:
2025-12-04T10:14:54.4122186Z   GIT_DEFAULT_BRANCH: main
2025-12-04T10:14:54.4122386Z ##[endgroup]
2025-12-04T10:14:54.4223255Z ii rocm-cmake 0.14.0.60401-83~22.04 amd64 rocm-cmake built using CMake
2025-12-04T10:14:54.4223741Z ii rocm-core 6.4.1.60401-83~22.04 amd64 ROCm Runtime software stack
2025-12-04T10:14:54.4224182Z ii rocm-dbgapi 0.77.2.60401-83~22.04 amd64 Library to provide AMD GPU debugger API
2025-12-04T10:14:54.4224689Z ii rocm-debug-agent 2.0.4.60401-83~22.04 amd64 Radeon Open Compute Debug Agent (ROCdebug-agent)
2025-12-04T10:14:54.4225410Z ii rocm-dev 6.4.1.60401-83~22.04 amd64 Radeon Open Compute (ROCm) Runtime software stack
2025-12-04T10:14:54.4225893Z ii rocm-device-libs 1.0.0.60401-83~22.04 amd64 Radeon Open Compute - device libraries
2025-12-04T10:14:54.4226315Z ii rocm-gdb 15.2.60401-83~22.04 amd64 ROCgdb
2025-12-04T10:14:54.4226958Z ii rocm-llvm 19.0.0.25184.60401-83~22.04 amd64 ROCm core compiler
2025-12-04T10:14:54.4227514Z ii rocm-opencl 2.0.0.60401-83~22.04 amd64 clr built using CMake
2025-12-04T10:14:54.4227951Z ii rocm-opencl-dev 2.0.0.60401-83~22.04 amd64 clr built using CMake
2025-12-04T10:14:54.4228404Z ii rocm-smi-lib 7.5.0.60401-83~22.04 amd64 AMD System Management libraries
2025-12-04T10:14:54.4228866Z ii rocm-utils 6.4.1.60401-83~22.04 amd64 Radeon Open Compute (ROCm) Runtime software stack
2025-12-04T10:14:54.4229360Z ii rocminfo 1.0.0.60401-83~22.04 amd64 Radeon Open Compute (ROCm) Runtime rocminfo tool
2025-12-04T10:14:54.4243831Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty
2025-12-04T10:14:54.4244297Z # ignore expansion of "docker ps -q" since it could be empty
2025-12-04T10:14:54.4244621Z # shellcheck disable=SC2046
2025-12-04T10:14:54.4244899Z docker stop $(docker ps -q) || true
2025-12-04T10:14:54.4245166Z # Prune all stopped containers.
2025-12-04T10:14:54.4245425Z docker container prune -f
2025-12-04T10:14:54.4250309Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-12-04T10:14:54.4250594Z env:
2025-12-04T10:14:54.4250824Z   GIT_DEFAULT_BRANCH: main
2025-12-04T10:14:54.4251033Z ##[endgroup]
2025-12-04T10:14:54.4522656Z docker: 'docker stop' requires at least 1 argument
2025-12-04T10:14:54.4523314Z Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
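The 'docker stop' usage error above is expected and harmless: with no containers running, $(docker ps -q) expands to nothing and the failure is swallowed by "|| true". A sketch of an equivalent cleanup that avoids triggering the error at all, assuming GNU xargs (its -r flag skips the command entirely on empty input):

    # Stop running containers, doing nothing when there are none.
    docker ps -q | xargs -r docker stop
    # Prune all stopped containers.
    docker container prune -f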
2025-12-04T10:14:54.4523642Z 2025-12-04T10:14:54.4523836Z See 'docker stop --help' for more information 2025-12-04T10:14:54.4602167Z Total reclaimed space: 0B 2025-12-04T10:14:54.4633272Z ##[group]Run cat /etc/os-release || true 2025-12-04T10:14:54.4633598Z cat /etc/os-release || true 2025-12-04T10:14:54.4633882Z cat /etc/apt/sources.list.d/rocm.list || true 2025-12-04T10:14:54.4634376Z cat /opt/rocm/.info/version || true 2025-12-04T10:14:54.4634611Z whoami 2025-12-04T10:14:54.4641205Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:14:54.4641499Z env: 2025-12-04T10:14:54.4641669Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:14:54.4641869Z ##[endgroup] 2025-12-04T10:14:54.4676825Z PRETTY_NAME="Ubuntu 22.04.5 LTS" 2025-12-04T10:14:54.4677060Z NAME="Ubuntu" 2025-12-04T10:14:54.4677268Z VERSION_ID="22.04" 2025-12-04T10:14:54.4677487Z VERSION="22.04.5 LTS (Jammy Jellyfish)" 2025-12-04T10:14:54.4677721Z VERSION_CODENAME=jammy 2025-12-04T10:14:54.4677913Z ID=ubuntu 2025-12-04T10:14:54.4678067Z ID_LIKE=debian 2025-12-04T10:14:54.4678297Z HOME_URL="https://www.ubuntu.com/" 2025-12-04T10:14:54.4678563Z SUPPORT_URL="https://help.ubuntu.com/" 2025-12-04T10:14:54.4678867Z BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" 2025-12-04T10:14:54.4679289Z PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" 2025-12-04T10:14:54.4679664Z UBUNTU_CODENAME=jammy 2025-12-04T10:14:54.4686305Z deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/6.4.1 jammy main 2025-12-04T10:14:54.4692714Z 6.4.1-83 2025-12-04T10:14:54.4702939Z runner 2025-12-04T10:14:54.4733013Z ##[group]Run dpkg -l | grep -E " amdgpu" 2025-12-04T10:14:54.4733509Z dpkg -l | grep -E " amdgpu" 2025-12-04T10:14:54.4743967Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:14:54.4744663Z env: 2025-12-04T10:14:54.4744938Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:14:54.4745262Z ##[endgroup] 2025-12-04T10:14:54.4815661Z ii amdgpu-core 1:6.4.60401-2164967.22.04 all Core meta package for unified amdgpu driver. 
2025-12-04T10:14:54.4816497Z ii amdgpu-install 6.4.60401-2164967.22.04 all AMDGPU driver repository and installer
2025-12-04T10:14:54.4846441Z ##[group]Run rocm-smi
2025-12-04T10:14:54.4846805Z rocm-smi
2025-12-04T10:14:54.4856860Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-12-04T10:14:54.4857315Z env:
2025-12-04T10:14:54.4857595Z   GIT_DEFAULT_BRANCH: main
2025-12-04T10:14:54.4857910Z ##[endgroup]
2025-12-04T10:14:54.5535777Z =========================================== ROCm System Management Interface ===========================================
2025-12-04T10:14:54.5536512Z ===================================================== Concise Info =====================================================
2025-12-04T10:14:54.5537258Z Device  Node  IDs (DID, GUID)  Temp (Junction)  Power (Socket)  Partitions (Mem, Compute, ID)  SCLK  MCLK    Fan  Perf  PwrCap   VRAM%  GPU%
2025-12-04T10:14:54.5539137Z ========================================================================================================================
2025-12-04T10:14:54.5540281Z 0       3     0x74a5, 51110    28.0°C           126.0W           NPS1, SPX, 0                   N/A   900Mhz  0%   auto  1000.0W  0%     0%
2025-12-04T10:14:54.5541255Z 1       5     0x74a5, 2987     28.0°C           129.0W           NPS1, SPX, 0                   N/A   900Mhz  0%   auto  1000.0W  0%     0%
2025-12-04T10:14:54.5542071Z 2       4     0x74a5, 61326    27.0°C           116.0W           NPS1, SPX, 0                   N/A   900Mhz  0%   auto  1000.0W  0%     0%
2025-12-04T10:14:54.5542896Z 3       2     0x74a5, 9091     27.0°C           126.0W           NPS1, SPX, 0                   N/A   900Mhz  0%   auto  1000.0W  0%     0%
2025-12-04T10:14:54.5543450Z ========================================================================================================================
2025-12-04T10:14:54.5543961Z ================================================= End of ROCm SMI Log ==================================================
2025-12-04T10:14:54.5611913Z ##[group]Run rocminfo
2025-12-04T10:14:54.5612265Z rocminfo
2025-12-04T10:14:54.5622113Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-12-04T10:14:54.5622578Z env:
2025-12-04T10:14:54.5622848Z   GIT_DEFAULT_BRANCH: main
2025-12-04T10:14:54.5623164Z ##[endgroup]
2025-12-04T10:14:54.6556126Z ROCk module version 6.12.12 is loaded
2025-12-04T10:14:54.6556705Z =====================
2025-12-04T10:14:54.6557061Z HSA System Attributes
2025-12-04T10:14:54.6557364Z =====================
2025-12-04T10:14:54.6557763Z Runtime Version: 1.15
2025-12-04T10:14:54.6558105Z Runtime Ext Version: 1.7
2025-12-04T10:14:54.6558474Z System Timestamp Freq.: 1000.000000MHz
2025-12-04T10:14:54.6559070Z Sig.
Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count) 2025-12-04T10:14:54.6559702Z Machine Model: LARGE 2025-12-04T10:14:54.6560260Z System Endianness: LITTLE 2025-12-04T10:14:54.6560765Z Mwaitx: DISABLED 2025-12-04T10:14:54.6561128Z XNACK enabled: NO 2025-12-04T10:14:54.6561475Z DMAbuf Support: YES 2025-12-04T10:14:54.6561806Z VMM Support: YES 2025-12-04T10:14:54.6562025Z 2025-12-04T10:14:54.6562141Z ========== 2025-12-04T10:14:54.6562524Z HSA Agents 2025-12-04T10:14:54.6562820Z ========== 2025-12-04T10:14:54.6563110Z ******* 2025-12-04T10:14:54.6563402Z Agent 1 2025-12-04T10:14:54.6564164Z ******* 2025-12-04T10:14:54.6564571Z Name: AMD EPYC 9575F 64-Core Processor 2025-12-04T10:14:54.6565073Z Uuid: CPU-XX 2025-12-04T10:14:54.6565560Z Marketing Name: AMD EPYC 9575F 64-Core Processor 2025-12-04T10:14:54.6566112Z Vendor Name: CPU 2025-12-04T10:14:54.6566588Z Feature: None specified 2025-12-04T10:14:54.6567068Z Profile: FULL_PROFILE 2025-12-04T10:14:54.6567546Z Float Round Mode: NEAR 2025-12-04T10:14:54.6568034Z Max Queue Number: 0(0x0) 2025-12-04T10:14:54.6568511Z Queue Min Size: 0(0x0) 2025-12-04T10:14:54.6568977Z Queue Max Size: 0(0x0) 2025-12-04T10:14:54.6569442Z Queue Type: MULTI 2025-12-04T10:14:54.6569892Z Node: 0 2025-12-04T10:14:54.6570332Z Device Type: CPU 2025-12-04T10:14:54.6570803Z Cache Info: 2025-12-04T10:14:54.6571166Z L1: 49152(0xc000) KB 2025-12-04T10:14:54.6571601Z Chip ID: 0(0x0) 2025-12-04T10:14:54.6572051Z ASIC Revision: 0(0x0) 2025-12-04T10:14:54.6572527Z Cacheline Size: 64(0x40) 2025-12-04T10:14:54.6573007Z Max Clock Freq. (MHz): 3300 2025-12-04T10:14:54.6573459Z BDFID: 0 2025-12-04T10:14:54.6573914Z Internal Node ID: 0 2025-12-04T10:14:54.6574385Z Compute Unit: 64 2025-12-04T10:14:54.6574849Z SIMDs per CU: 0 2025-12-04T10:14:54.6575322Z Shader Engines: 0 2025-12-04T10:14:54.6575811Z Shader Arrs. per Eng.: 0 2025-12-04T10:14:54.6576310Z WatchPts on Addr. 
Ranges:1 2025-12-04T10:14:54.6576753Z Memory Properties: 2025-12-04T10:14:54.6577089Z Features: None 2025-12-04T10:14:54.6577418Z Pool Info: 2025-12-04T10:14:54.6577892Z Pool 1 2025-12-04T10:14:54.6578306Z Segment: GLOBAL; FLAGS: FINE GRAINED 2025-12-04T10:14:54.6578788Z Size: 1584733356(0x5e751cac) KB 2025-12-04T10:14:54.6579259Z Allocatable: TRUE 2025-12-04T10:14:54.6579745Z Alloc Granule: 4KB 2025-12-04T10:14:54.6580260Z Alloc Recommended Granule:4KB 2025-12-04T10:14:54.6580856Z Alloc Alignment: 4KB 2025-12-04T10:14:54.6581354Z Accessible by all: TRUE 2025-12-04T10:14:54.6581783Z Pool 2 2025-12-04T10:14:54.6582186Z Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED 2025-12-04T10:14:54.6582656Z Size: 1584733356(0x5e751cac) KB 2025-12-04T10:14:54.6583112Z Allocatable: TRUE 2025-12-04T10:14:54.6583600Z Alloc Granule: 4KB 2025-12-04T10:14:54.6584113Z Alloc Recommended Granule:4KB 2025-12-04T10:14:54.6584622Z Alloc Alignment: 4KB 2025-12-04T10:14:54.6585115Z Accessible by all: TRUE 2025-12-04T10:14:54.6585540Z Pool 3 2025-12-04T10:14:54.6586046Z Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED 2025-12-04T10:14:54.6586503Z Size: 1584733356(0x5e751cac) KB 2025-12-04T10:14:54.6587000Z Allocatable: TRUE 2025-12-04T10:14:54.6587485Z Alloc Granule: 4KB 2025-12-04T10:14:54.6587992Z Alloc Recommended Granule:4KB 2025-12-04T10:14:54.6588504Z Alloc Alignment: 4KB 2025-12-04T10:14:54.6589028Z Accessible by all: TRUE 2025-12-04T10:14:54.6589481Z Pool 4 2025-12-04T10:14:54.6589895Z Segment: GLOBAL; FLAGS: COARSE GRAINED 2025-12-04T10:14:54.6590354Z Size: 1584733356(0x5e751cac) KB 2025-12-04T10:14:54.6590890Z Allocatable: TRUE 2025-12-04T10:14:54.6591376Z Alloc Granule: 4KB 2025-12-04T10:14:54.6591881Z Alloc Recommended Granule:4KB 2025-12-04T10:14:54.6592428Z Alloc Alignment: 4KB 2025-12-04T10:14:54.6592997Z Accessible by all: TRUE 2025-12-04T10:14:54.6593445Z ISA Info: 2025-12-04T10:14:54.6593774Z ******* 2025-12-04T10:14:54.6594077Z Agent 2 2025-12-04T10:14:54.6594399Z ******* 2025-12-04T10:14:54.6594754Z Name: AMD EPYC 9575F 64-Core Processor 2025-12-04T10:14:54.6595231Z Uuid: CPU-XX 2025-12-04T10:14:54.6595729Z Marketing Name: AMD EPYC 9575F 64-Core Processor 2025-12-04T10:14:54.6596245Z Vendor Name: CPU 2025-12-04T10:14:54.6596737Z Feature: None specified 2025-12-04T10:14:54.6597211Z Profile: FULL_PROFILE 2025-12-04T10:14:54.6597687Z Float Round Mode: NEAR 2025-12-04T10:14:54.6598190Z Max Queue Number: 0(0x0) 2025-12-04T10:14:54.6598659Z Queue Min Size: 0(0x0) 2025-12-04T10:14:54.6599127Z Queue Max Size: 0(0x0) 2025-12-04T10:14:54.6599706Z Queue Type: MULTI 2025-12-04T10:14:54.6600148Z Node: 1 2025-12-04T10:14:54.6600640Z Device Type: CPU 2025-12-04T10:14:54.6601057Z Cache Info: 2025-12-04T10:14:54.6601410Z L1: 49152(0xc000) KB 2025-12-04T10:14:54.6601844Z Chip ID: 0(0x0) 2025-12-04T10:14:54.6602301Z ASIC Revision: 0(0x0) 2025-12-04T10:14:54.6602779Z Cacheline Size: 64(0x40) 2025-12-04T10:14:54.6603258Z Max Clock Freq. (MHz): 3300 2025-12-04T10:14:54.6603704Z BDFID: 0 2025-12-04T10:14:54.6604159Z Internal Node ID: 1 2025-12-04T10:14:54.6604632Z Compute Unit: 64 2025-12-04T10:14:54.6605094Z SIMDs per CU: 0 2025-12-04T10:14:54.6605565Z Shader Engines: 0 2025-12-04T10:14:54.6606045Z Shader Arrs. per Eng.: 0 2025-12-04T10:14:54.6606545Z WatchPts on Addr. 
Ranges:1 2025-12-04T10:14:54.6607168Z Memory Properties: 2025-12-04T10:14:54.6607495Z Features: None 2025-12-04T10:14:54.6607924Z Pool Info: 2025-12-04T10:14:54.6608242Z Pool 1 2025-12-04T10:14:54.6608645Z Segment: GLOBAL; FLAGS: FINE GRAINED 2025-12-04T10:14:54.6609131Z Size: 1585355580(0x5e7e9b3c) KB 2025-12-04T10:14:54.6609597Z Allocatable: TRUE 2025-12-04T10:14:54.6610097Z Alloc Granule: 4KB 2025-12-04T10:14:54.6610689Z Alloc Recommended Granule:4KB 2025-12-04T10:14:54.6611200Z Alloc Alignment: 4KB 2025-12-04T10:14:54.6611699Z Accessible by all: TRUE 2025-12-04T10:14:54.6612130Z Pool 2 2025-12-04T10:14:54.6612526Z Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED 2025-12-04T10:14:54.6613002Z Size: 1585355580(0x5e7e9b3c) KB 2025-12-04T10:14:54.6613470Z Allocatable: TRUE 2025-12-04T10:14:54.6613959Z Alloc Granule: 4KB 2025-12-04T10:14:54.6614534Z Alloc Recommended Granule:4KB 2025-12-04T10:14:54.6615091Z Alloc Alignment: 4KB 2025-12-04T10:14:54.6615580Z Accessible by all: TRUE 2025-12-04T10:14:54.6616004Z Pool 3 2025-12-04T10:14:54.6616402Z Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED 2025-12-04T10:14:54.6616858Z Size: 1585355580(0x5e7e9b3c) KB 2025-12-04T10:14:54.6617312Z Allocatable: TRUE 2025-12-04T10:14:54.6617794Z Alloc Granule: 4KB 2025-12-04T10:14:54.6618301Z Alloc Recommended Granule:4KB 2025-12-04T10:14:54.6618815Z Alloc Alignment: 4KB 2025-12-04T10:14:54.6619308Z Accessible by all: TRUE 2025-12-04T10:14:54.6619732Z Pool 4 2025-12-04T10:14:54.6620123Z Segment: GLOBAL; FLAGS: COARSE GRAINED 2025-12-04T10:14:54.6620577Z Size: 1585355580(0x5e7e9b3c) KB 2025-12-04T10:14:54.6621241Z Allocatable: TRUE 2025-12-04T10:14:54.6621842Z Alloc Granule: 4KB 2025-12-04T10:14:54.6622345Z Alloc Recommended Granule:4KB 2025-12-04T10:14:54.6622859Z Alloc Alignment: 4KB 2025-12-04T10:14:54.6623347Z Accessible by all: TRUE 2025-12-04T10:14:54.6623774Z ISA Info: 2025-12-04T10:14:54.6624088Z ******* 2025-12-04T10:14:54.6624382Z Agent 3 2025-12-04T10:14:54.6624675Z ******* 2025-12-04T10:14:54.6625011Z Name: gfx942 2025-12-04T10:14:54.6625479Z Uuid: GPU-0786bf8e0c323cdf 2025-12-04T10:14:54.6625961Z Marketing Name: AMD Instinct MI325X 2025-12-04T10:14:54.6626447Z Vendor Name: AMD 2025-12-04T10:14:54.6626923Z Feature: KERNEL_DISPATCH 2025-12-04T10:14:54.6627394Z Profile: BASE_PROFILE 2025-12-04T10:14:54.6627867Z Float Round Mode: NEAR 2025-12-04T10:14:54.6628353Z Max Queue Number: 128(0x80) 2025-12-04T10:14:54.6628829Z Queue Min Size: 64(0x40) 2025-12-04T10:14:54.6629297Z Queue Max Size: 131072(0x20000) 2025-12-04T10:14:54.6629860Z Queue Type: MULTI 2025-12-04T10:14:54.6630299Z Node: 2 2025-12-04T10:14:54.6630795Z Device Type: GPU 2025-12-04T10:14:54.6631210Z Cache Info: 2025-12-04T10:14:54.6631565Z L1: 32(0x20) KB 2025-12-04T10:14:54.6631977Z L2: 4096(0x1000) KB 2025-12-04T10:14:54.6632390Z L3: 262144(0x40000) KB 2025-12-04T10:14:54.6632815Z Chip ID: 29861(0x74a5) 2025-12-04T10:14:54.6633273Z ASIC Revision: 1(0x1) 2025-12-04T10:14:54.6633750Z Cacheline Size: 128(0x80) 2025-12-04T10:14:54.6634236Z Max Clock Freq. (MHz): 2100 2025-12-04T10:14:54.6634699Z BDFID: 29952 2025-12-04T10:14:54.6635153Z Internal Node ID: 2 2025-12-04T10:14:54.6635629Z Compute Unit: 304 2025-12-04T10:14:54.6636089Z SIMDs per CU: 4 2025-12-04T10:14:54.6636556Z Shader Engines: 32 2025-12-04T10:14:54.6637044Z Shader Arrs. per Eng.: 1 2025-12-04T10:14:54.6637543Z WatchPts on Addr. 
Ranges:4 2025-12-04T10:14:54.6638049Z Coherent Host Access: FALSE 2025-12-04T10:14:54.6638484Z Memory Properties: 2025-12-04T10:14:54.6638839Z Features: KERNEL_DISPATCH 2025-12-04T10:14:54.6639288Z Fast F16 Operation: TRUE 2025-12-04T10:14:54.6639776Z Wavefront Size: 64(0x40) 2025-12-04T10:14:54.6640271Z Workgroup Max Size: 1024(0x400) 2025-12-04T10:14:54.6640782Z Workgroup Max Size per Dimension: 2025-12-04T10:14:54.6641168Z x 1024(0x400) 2025-12-04T10:14:54.6641566Z y 1024(0x400) 2025-12-04T10:14:54.6641955Z z 1024(0x400) 2025-12-04T10:14:54.6642387Z Max Waves Per CU: 32(0x20) 2025-12-04T10:14:54.6643007Z Max Work-item Per CU: 2048(0x800) 2025-12-04T10:14:54.6643490Z Grid Max Size: 4294967295(0xffffffff) 2025-12-04T10:14:54.6643912Z Grid Max Size per Dimension: 2025-12-04T10:14:54.6644272Z x 4294967295(0xffffffff) 2025-12-04T10:14:54.6644670Z y 4294967295(0xffffffff) 2025-12-04T10:14:54.6645074Z z 4294967295(0xffffffff) 2025-12-04T10:14:54.6645534Z Max fbarriers/Workgrp: 32 2025-12-04T10:14:54.6655370Z Packet Processor uCode:: 185 2025-12-04T10:14:54.6655917Z SDMA engine uCode:: 24 2025-12-04T10:14:54.6656414Z IOMMU Support:: None 2025-12-04T10:14:54.6657079Z Pool Info: 2025-12-04T10:14:54.6657522Z Pool 1 2025-12-04T10:14:54.6657956Z Segment: GLOBAL; FLAGS: COARSE GRAINED 2025-12-04T10:14:54.6658440Z Size: 268419072(0xfffc000) KB 2025-12-04T10:14:54.6658911Z Allocatable: TRUE 2025-12-04T10:14:54.6659403Z Alloc Granule: 4KB 2025-12-04T10:14:54.6659916Z Alloc Recommended Granule:2048KB 2025-12-04T10:14:54.6660778Z Alloc Alignment: 4KB 2025-12-04T10:14:54.6661277Z Accessible by all: FALSE 2025-12-04T10:14:54.6661713Z Pool 2 2025-12-04T10:14:54.6662119Z Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED 2025-12-04T10:14:54.6662589Z Size: 268419072(0xfffc000) KB 2025-12-04T10:14:54.6663049Z Allocatable: TRUE 2025-12-04T10:14:54.6663544Z Alloc Granule: 4KB 2025-12-04T10:14:54.6664050Z Alloc Recommended Granule:2048KB 2025-12-04T10:14:54.6664563Z Alloc Alignment: 4KB 2025-12-04T10:14:54.6665065Z Accessible by all: FALSE 2025-12-04T10:14:54.6665491Z Pool 3 2025-12-04T10:14:54.6665894Z Segment: GLOBAL; FLAGS: FINE GRAINED 2025-12-04T10:14:54.6666361Z Size: 268419072(0xfffc000) KB 2025-12-04T10:14:54.6666822Z Allocatable: TRUE 2025-12-04T10:14:54.6667312Z Alloc Granule: 4KB 2025-12-04T10:14:54.6667818Z Alloc Recommended Granule:2048KB 2025-12-04T10:14:54.6668327Z Alloc Alignment: 4KB 2025-12-04T10:14:54.6668827Z Accessible by all: FALSE 2025-12-04T10:14:54.6669246Z Pool 4 2025-12-04T10:14:54.6669629Z Segment: GROUP 2025-12-04T10:14:54.6670073Z Size: 64(0x40) KB 2025-12-04T10:14:54.6670528Z Allocatable: FALSE 2025-12-04T10:14:54.6671074Z Alloc Granule: 0KB 2025-12-04T10:14:54.6671582Z Alloc Recommended Granule:0KB 2025-12-04T10:14:54.6672091Z Alloc Alignment: 0KB 2025-12-04T10:14:54.6672584Z Accessible by all: FALSE 2025-12-04T10:14:54.6673005Z ISA Info: 2025-12-04T10:14:54.6673320Z ISA 1 2025-12-04T10:14:54.6673729Z Name: amdgcn-amd-amdhsa--gfx942:sramecc+:xnack- 2025-12-04T10:14:54.6674376Z Machine Models: HSA_MACHINE_MODEL_LARGE 2025-12-04T10:14:54.6674886Z Profiles: HSA_PROFILE_BASE 2025-12-04T10:14:54.6675384Z Default Rounding Mode: NEAR 2025-12-04T10:14:54.6675901Z Default Rounding Mode: NEAR 2025-12-04T10:14:54.6676376Z Fast f16: TRUE 2025-12-04T10:14:54.6676858Z Workgroup Max Size: 1024(0x400) 2025-12-04T10:14:54.6677309Z Workgroup Max Size per Dimension: 2025-12-04T10:14:54.6677718Z x 1024(0x400) 2025-12-04T10:14:54.6678122Z y 1024(0x400) 2025-12-04T10:14:54.6678515Z z 1024(0x400) 
2025-12-04T10:14:54.6678948Z Grid Max Size: 4294967295(0xffffffff)
2025-12-04T10:14:54.6679387Z Grid Max Size per Dimension:
2025-12-04T10:14:54.6679768Z x 4294967295(0xffffffff)
2025-12-04T10:14:54.6680175Z y 4294967295(0xffffffff)
2025-12-04T10:14:54.6680580Z z 4294967295(0xffffffff)
2025-12-04T10:14:54.6681083Z FBarrier Max Size: 32
2025-12-04T10:14:54.6681621Z ISA 2
2025-12-04T10:14:54.6682062Z Name: amdgcn-amd-amdhsa--gfx9-4-generic:sramecc+:xnack-
2025-12-04T10:14:54.6682610Z Machine Models: HSA_MACHINE_MODEL_LARGE
2025-12-04T10:14:54.6707968Z Profiles: HSA_PROFILE_BASE
2025-12-04T10:14:54.6708571Z Default Rounding Mode: NEAR
2025-12-04T10:14:54.6709106Z Default Rounding Mode: NEAR
2025-12-04T10:14:54.6709602Z Fast f16: TRUE
2025-12-04T10:14:54.6710093Z Workgroup Max Size: 1024(0x400)
2025-12-04T10:14:54.6710563Z Workgroup Max Size per Dimension:
2025-12-04T10:14:54.6711042Z x 1024(0x400)
2025-12-04T10:14:54.6711453Z y 1024(0x400)
2025-12-04T10:14:54.6711855Z z 1024(0x400)
2025-12-04T10:14:54.6712307Z Grid Max Size: 4294967295(0xffffffff)
2025-12-04T10:14:54.6712760Z Grid Max Size per Dimension:
2025-12-04T10:14:54.6713136Z x 4294967295(0xffffffff)
2025-12-04T10:14:54.6713543Z y 4294967295(0xffffffff)
2025-12-04T10:14:54.6713947Z z 4294967295(0xffffffff)
2025-12-04T10:14:54.6714407Z FBarrier Max Size: 32
[Agents 4, 5 and 6: three further AMD Instinct MI325X (gfx942) GPU agents whose rocminfo dumps are identical to Agent 3 above except for Uuid (GPU-f1277e79873f2863, GPU-a60c6760ff6d4bed, GPU-0c7715a1f9faf149), Node / Internal Node ID (3, 4, 5) and BDFID (1280, 25856, 5376); duplicate cache, pool and ISA details omitted.]
2025-12-04T10:14:54.6883048Z *** Done ***
2025-12-04T10:14:54.6909861Z ##[group]Run ngpu=$(rocminfo | grep -c -E 'Name:.*\sgfx')
2025-12-04T10:14:54.6910439Z ngpu=$(rocminfo | grep -c -E 'Name:.*\sgfx')
2025-12-04T10:14:54.6911426Z msg="Please file an issue on pytorch/pytorch reporting the faulty runner. Include a link to the runner logs so the runner can be identified"
2025-12-04T10:14:54.6912278Z if [[ $ngpu -eq 0 ]]; then
2025-12-04T10:14:54.6912762Z   echo "Error: Failed to detect any GPUs on the runner"
2025-12-04T10:14:54.6913221Z   echo "$msg"
2025-12-04T10:14:54.6913545Z   exit 1
2025-12-04T10:14:54.6913840Z fi
2025-12-04T10:14:54.6922654Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-12-04T10:14:54.6923109Z env:
2025-12-04T10:14:54.6923387Z   GIT_DEFAULT_BRANCH: main
2025-12-04T10:14:54.6923716Z ##[endgroup]
2025-12-04T10:14:54.8002310Z ##[group]Run pytorch/pytorch/.github/actions/diskspace-cleanup@main
2025-12-04T10:14:54.8003033Z with:
2025-12-04T10:14:54.8003330Z   diskspace-cutoff: 70
2025-12-04T10:14:54.8003630Z env:
2025-12-04T10:14:54.8003905Z   GIT_DEFAULT_BRANCH: main
2025-12-04T10:14:54.8004225Z ##[endgroup]
2025-12-04T10:14:54.8059727Z ##[group]Run set -ex
2025-12-04T10:14:54.8060128Z set -ex
2025-12-04T10:14:54.8060428Z diskspace_cutoff=70
2025-12-04T10:14:54.8061050Z docker_root_dir=$(docker info -f '{{.DockerRootDir}}')
2025-12-04T10:14:54.8061588Z if [ ! -d "$docker_root_dir" ]; then
2025-12-04T10:14:54.8062227Z   echo "Docker root directory ($docker_root_dir) does not exist. Skipping disk space check."
2025-12-04T10:14:54.8062817Z   exit 0
2025-12-04T10:14:54.8063100Z fi
2025-12-04T10:14:54.8063655Z diskspace=$(df -H --output=pcent ${docker_root_dir} | sed -n 2p | sed 's/%//' | sed 's/ //')
2025-12-04T10:14:54.8064720Z msg="Please file an issue on pytorch/pytorch reporting the faulty runner.
Include a link to the runner logs so the runner can be identified" 2025-12-04T10:14:54.8065638Z if [[ "$diskspace" -ge "$diskspace_cutoff" ]] ; then 2025-12-04T10:14:54.8066108Z  docker system prune -af 2025-12-04T10:14:54.8066717Z  diskspace_new=$(df -H --output=pcent ${docker_root_dir} | sed -n 2p | sed 's/%//' | sed 's/ //') 2025-12-04T10:14:54.8067415Z  if [[ "$diskspace_new" -gt "$diskspace_cutoff" ]] ; then 2025-12-04T10:14:54.8068130Z  diskspace_cutoff_int=$((diskspace_cutoff + 0)) 2025-12-04T10:14:54.8068623Z  difference=$((100 - diskspace_cutoff_int)) 2025-12-04T10:14:54.8069284Z  echo "Error: Available diskspace is less than $difference percent. Not enough diskspace." 2025-12-04T10:14:54.8069880Z  echo "$msg" 2025-12-04T10:14:54.8070205Z  exit 1 2025-12-04T10:14:54.8070498Z  else 2025-12-04T10:14:54.8070928Z  difference=$((diskspace - diskspace_new)) 2025-12-04T10:14:54.8071415Z  echo "Diskspace saved: $difference percent" 2025-12-04T10:14:54.8071818Z  fi 2025-12-04T10:14:54.8072079Z fi 2025-12-04T10:14:54.8082212Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:14:54.8082658Z env: 2025-12-04T10:14:54.8082928Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:14:54.8083249Z ##[endgroup] 2025-12-04T10:14:54.8122303Z + diskspace_cutoff=70 2025-12-04T10:14:54.8128731Z ++ docker info -f '{{.DockerRootDir}}' 2025-12-04T10:14:54.8461658Z + docker_root_dir=/home/runner/docker-data 2025-12-04T10:14:54.8462090Z + '[' '!' -d /home/runner/docker-data ']' 2025-12-04T10:14:54.8469068Z ++ df -H --output=pcent /home/runner/docker-data 2025-12-04T10:14:54.8469476Z ++ sed -n 2p 2025-12-04T10:14:54.8470188Z ++ sed s/%// 2025-12-04T10:14:54.8470453Z ++ sed 's/ //' 2025-12-04T10:14:54.8491540Z + diskspace=' 3' 2025-12-04T10:14:54.8493146Z + msg='Please file an issue on pytorch/pytorch reporting the faulty runner. 
Include a link to the runner logs so the runner can be identified' 2025-12-04T10:14:54.8494046Z + [[ 3 -ge 70 ]] 2025-12-04T10:14:54.8543840Z ##[group]Run RUNNER_ARTIFACT_DIR="${RUNNER_TEMP}/artifacts" 2025-12-04T10:14:54.8544472Z RUNNER_ARTIFACT_DIR="${RUNNER_TEMP}/artifacts" 2025-12-04T10:14:54.8544937Z rm -rf "${RUNNER_ARTIFACT_DIR}" 2025-12-04T10:14:54.8545352Z mkdir -p "${RUNNER_ARTIFACT_DIR}" 2025-12-04T10:14:54.8545911Z echo "RUNNER_ARTIFACT_DIR=${RUNNER_ARTIFACT_DIR}" >> "${GITHUB_ENV}" 2025-12-04T10:14:54.8546403Z  2025-12-04T10:14:54.8546778Z RUNNER_TEST_RESULTS_DIR="${RUNNER_TEMP}/test-results" 2025-12-04T10:14:54.8547272Z rm -rf "${RUNNER_TEST_RESULTS_DIR}" 2025-12-04T10:14:54.8547686Z mkdir -p "${RUNNER_TEST_RESULTS_DIR}" 2025-12-04T10:14:54.8548247Z echo "RUNNER_TEST_RESULTS_DIR=${RUNNER_TEST_RESULTS_DIR}" >> "${GITHUB_ENV}" 2025-12-04T10:14:54.8548774Z  2025-12-04T10:14:54.8549321Z RUNNER_DOCS_DIR="${RUNNER_TEMP}/docs" 2025-12-04T10:14:54.8549732Z rm -rf "${RUNNER_DOCS_DIR}" 2025-12-04T10:14:54.8550111Z mkdir -p "${RUNNER_DOCS_DIR}" 2025-12-04T10:14:54.8550595Z echo "RUNNER_DOCS_DIR=${RUNNER_DOCS_DIR}" >> "${GITHUB_ENV}" 2025-12-04T10:14:54.8560712Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:14:54.8561155Z env: 2025-12-04T10:14:54.8561439Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:14:54.8561754Z ##[endgroup] 2025-12-04T10:14:54.8673213Z ##[group]Run env | grep '^GITHUB' >> "${RUNNER_TEMP}/github_env_${GITHUB_RUN_ID}" 2025-12-04T10:14:54.8673903Z env | grep '^GITHUB' >> "${RUNNER_TEMP}/github_env_${GITHUB_RUN_ID}" 2025-12-04T10:14:54.8674495Z env | grep '^CI' >> "${RUNNER_TEMP}/github_env_${GITHUB_RUN_ID}" 2025-12-04T10:14:54.8684710Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:14:54.8685157Z env: 2025-12-04T10:14:54.8685426Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:14:54.8685859Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:14:54.8686408Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:14:54.8686911Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:14:54.8687307Z ##[endgroup] 2025-12-04T10:14:54.8771956Z ##[group]Run # All GPUs are visible to the runner; visibility, if needed, will be set by run_test.py. 2025-12-04T10:14:54.8772985Z # All GPUs are visible to the runner; visibility, if needed, will be set by run_test.py. 2025-12-04T10:14:54.8773611Z # Add render group for container creation. 2025-12-04T10:14:54.8774133Z render_gid=`cat /etc/group | grep render | cut -d: -f3` 2025-12-04T10:14:54.8774767Z # Ensure GPU isolation if pod is part of kubernetes setup with DEVICE_FLAG. 2025-12-04T10:14:54.8775401Z if [ -f "/etc/podinfo/gha-render-devices" ]; then 2025-12-04T10:14:54.8775941Z  DEVICE_FLAG=$(cat /etc/podinfo/gha-render-devices) 2025-12-04T10:14:54.8776378Z else 2025-12-04T10:14:54.8776688Z  DEVICE_FLAG="--device /dev/dri" 2025-12-04T10:14:54.8777050Z fi 2025-12-04T10:14:54.8777618Z # The --group-add daemon and --group-add bin are needed in the Ubuntu 24.04 and Almalinux OSs respectively. 2025-12-04T10:14:54.8778503Z # This is due to the device files (/dev/kfd & /dev/dri) being owned by video group on bare metal. 2025-12-04T10:14:54.8779317Z # This video group ID maps to subgid 1 inside the docker image due to the /etc/subgid entries. 2025-12-04T10:14:54.8780172Z # The group name corresponding to group ID 1 can change depending on the OS, so both are necessary. 
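# Resolving render_gid from /etc/group at runtime (rather than hard-coding a number) keeps the
# flag correct on hosts where the render group has a different numeric ID; /dev/kfd is the ROCm
# compute (KFD) interface, and the /dev/dri render nodes expose the individual GPU devices.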
2025-12-04T10:14:54.8781951Z echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd $DEVICE_FLAG --group-add video --group-add $render_gid --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host" >> "${GITHUB_ENV}" 2025-12-04T10:14:54.8791247Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:14:54.8791684Z env: 2025-12-04T10:14:54.8791953Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:14:54.8792355Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:14:54.8812523Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:14:54.8813045Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:14:54.8813450Z ##[endgroup] 2025-12-04T10:14:54.8938720Z ##[group]Run aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722 2025-12-04T10:14:54.8939344Z with: 2025-12-04T10:14:54.8939803Z role-to-assume: arn:aws:iam::308535385114:role/gha_workflow_s3_and_ecr_read_only 2025-12-04T10:14:54.8940345Z aws-region: us-east-1 2025-12-04T10:14:54.8940778Z role-duration-seconds: 18000 2025-12-04T10:14:54.8941151Z audience: sts.amazonaws.com 2025-12-04T10:14:54.8941472Z env: 2025-12-04T10:14:54.8941739Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:14:54.8942346Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:14:54.8942902Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:14:54.8943420Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:14:54.8945076Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:14:54.8946684Z ##[endgroup] 2025-12-04T10:14:55.2327406Z Assuming role with OIDC 2025-12-04T10:14:55.5903888Z Authenticated as assumedRoleId AROAUPVRELQNLLCOPFEJR:GitHubActions 2025-12-04T10:14:55.6948116Z ##[group]Run aws-actions/amazon-ecr-login@062b18b96a7aff071d4dc91bc00c4c1a7945b076 2025-12-04T10:14:55.6949216Z with: 2025-12-04T10:14:55.6949516Z mask-password: true 2025-12-04T10:14:55.6949855Z registry-type: private 2025-12-04T10:14:55.6950213Z skip-logout: false 2025-12-04T10:14:55.6950516Z env: 2025-12-04T10:14:55.6950865Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:14:55.6951306Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:14:55.6951871Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:14:55.6952399Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:14:55.6954063Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:14:55.6955866Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:14:55.6956237Z AWS_REGION: us-east-1 2025-12-04T10:14:55.6957066Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:14:55.6957558Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:14:55.6964806Z AWS_SESSION_TOKEN: *** 2025-12-04T10:14:55.6965145Z ##[endgroup] 2025-12-04T10:14:56.1314761Z Logging into registry 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:14:56.7885484Z ##[group]Run env | grep '^GITHUB' >> "${RUNNER_TEMP}/github_env_${GITHUB_RUN_ID}" 
2025-12-04T10:14:56.7886223Z env | grep '^GITHUB' >> "${RUNNER_TEMP}/github_env_${GITHUB_RUN_ID}" 2025-12-04T10:14:56.7886848Z env | grep '^CI' >> "${RUNNER_TEMP}/github_env_${GITHUB_RUN_ID}" 2025-12-04T10:14:56.7887494Z env | grep '^RUNNER' >> "${RUNNER_TEMP}/github_env_${GITHUB_RUN_ID}" 2025-12-04T10:14:56.7898508Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:14:56.7898978Z env: 2025-12-04T10:14:56.7899281Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:14:56.7899721Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:14:56.7900292Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:14:56.7900896Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:14:56.7902496Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:14:56.7904108Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:14:56.7904484Z AWS_REGION: us-east-1 2025-12-04T10:14:56.7905008Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:14:56.7905507Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:14:56.7913211Z AWS_SESSION_TOKEN: *** 2025-12-04T10:14:56.7913552Z ##[endgroup] 2025-12-04T10:14:56.8071493Z ##[group]Run ngpu=$(rocminfo | grep -c -E 'Name:.*\sgfx') 2025-12-04T10:14:56.8072112Z ngpu=$(rocminfo | grep -c -E 'Name:.*\sgfx') 2025-12-04T10:14:56.8072911Z if [[ $ngpu -lt 2 ]]; then # We are temporarily reducing this down to 2 from 4 so that we can run tests on nodes with fewer GPUs. 2025-12-04T10:14:56.8073849Z  echo "Error: only $ngpu GPU(s) detected, at least 2 GPUs are needed for distributed jobs" 2025-12-04T10:14:56.8074436Z  exit 1 2025-12-04T10:14:56.8074733Z fi 2025-12-04T10:14:56.8084560Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:14:56.8085025Z env: 2025-12-04T10:14:56.8085328Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:14:56.8085764Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:14:56.8086358Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:14:56.8086908Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:14:56.8088602Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:14:56.8090204Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:14:56.8090593Z AWS_REGION: us-east-1 2025-12-04T10:14:56.8091485Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:14:56.8091986Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:14:56.8098979Z AWS_SESSION_TOKEN: *** 2025-12-04T10:14:56.8099316Z ##[endgroup] 2025-12-04T10:14:56.9392132Z ##[group]Run pytorch/test-infra/.github/actions/calculate-docker-image@main 2025-12-04T10:14:56.9392705Z with: 2025-12-04T10:14:56.9393811Z docker-image-name: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:14:56.9394793Z use-custom-docker-registry: true 2025-12-04T10:14:56.9395206Z docker-build-dir: .ci/docker 2025-12-04T10:14:56.9395596Z docker-build-script: ./build.sh 2025-12-04T10:14:56.9395982Z working-directory: . 
2025-12-04T10:14:56.9396442Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:14:56.9396931Z force-push: false 2025-12-04T10:14:56.9397235Z env: 2025-12-04T10:14:56.9397524Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:14:56.9397957Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:14:56.9398521Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:14:56.9399114Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:14:56.9400869Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:14:56.9402486Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:14:56.9402851Z AWS_REGION: us-east-1 2025-12-04T10:14:56.9403322Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:14:56.9403806Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:14:56.9410963Z AWS_SESSION_TOKEN: *** 2025-12-04T10:14:56.9411290Z ##[endgroup] 2025-12-04T10:14:56.9435672Z ##[group]Run set -ex 2025-12-04T10:14:56.9436065Z set -ex 2025-12-04T10:14:56.9436364Z  2025-12-04T10:14:56.9436870Z # If the docker build directory or the build script doesn't exist, the action will 2025-12-04T10:14:56.9437681Z # gracefully return the docker image name as it is. Pulling docker image in Linux 2025-12-04T10:14:56.9438378Z # job could then download the pre-built image as usual 2025-12-04T10:14:56.9439229Z if [[ -d "${DOCKER_BUILD_DIR}" ]] && [[ -f "${DOCKER_BUILD_DIR}/${DOCKER_BUILD_SCRIPT}" ]] && [[ "${USE_CUSTOM_DOCKER_REGISTRY}" == "true" ]]; then 2025-12-04T10:14:56.9440010Z  echo "skip=false" >> "${GITHUB_OUTPUT}" 2025-12-04T10:14:56.9440433Z else 2025-12-04T10:14:56.9440854Z  echo "skip=true" >> "${GITHUB_OUTPUT}" 2025-12-04T10:14:56.9441416Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2025-12-04T10:14:56.9441909Z  2025-12-04T10:14:56.9442582Z  echo "Not using custom ECR registry. Either it was not requested or there is no Docker build script in the ${REPO_NAME} repo..." 
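 # skip=true hands the image name through unchanged, so the later pull step can still
 # download the pre-built image directly from its registry.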
2025-12-04T10:14:56.9443331Z  exit 0 2025-12-04T10:14:56.9443624Z fi 2025-12-04T10:14:56.9443904Z  2025-12-04T10:14:56.9444352Z if [[ "${DOCKER_IMAGE_NAME}" == *"${DOCKER_REGISTRY}/${REPO_NAME}"* ]]; then 2025-12-04T10:14:56.9445103Z  # The docker image name already includes the ECR prefix and tag, so we can just 2025-12-04T10:14:56.9445767Z  # use it as it is, but first let's extract the tag 2025-12-04T10:14:56.9446373Z  DOCKER_TAG=$(echo "${DOCKER_IMAGE_NAME}" | awk -F '[:,]' '{print $2}') 2025-12-04T10:14:56.9447003Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-12-04T10:14:56.9447608Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2025-12-04T10:14:56.9448105Z else 2025-12-04T10:14:56.9448463Z  if [[ "${DOCKER_IMAGE_NAME}" == *:* ]]; then 2025-12-04T10:14:56.9448952Z  CUSTOM_TAG_PREFIX=${DOCKER_IMAGE_NAME#*:} 2025-12-04T10:14:56.9449448Z  DOCKER_IMAGE_NAME=${DOCKER_IMAGE_NAME%%:*} 2025-12-04T10:14:56.9449860Z  fi 2025-12-04T10:14:56.9450702Z  DOCKER_TAG=${CUSTOM_TAG_PREFIX:+${CUSTOM_TAG_PREFIX}-}$(git rev-parse HEAD:"${DOCKER_BUILD_DIR}") 2025-12-04T10:14:56.9451445Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-12-04T10:14:56.9452586Z  echo "docker-image=${DOCKER_REGISTRY}/${REPO_NAME}/${DOCKER_IMAGE_NAME}:${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-12-04T10:14:56.9453429Z  echo "custom-tag-prefix=${CUSTOM_TAG_PREFIX}" >> "${GITHUB_OUTPUT}" 2025-12-04T10:14:56.9453949Z fi 2025-12-04T10:14:56.9463047Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:14:56.9463515Z env: 2025-12-04T10:14:56.9463817Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:14:56.9464255Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:14:56.9464826Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:14:56.9465373Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:14:56.9467028Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:14:56.9468646Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:14:56.9469026Z AWS_REGION: us-east-1 2025-12-04T10:14:56.9469478Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:14:56.9469968Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:14:56.9477204Z AWS_SESSION_TOKEN: *** 2025-12-04T10:14:56.9477547Z REPO_NAME: pytorch 2025-12-04T10:14:56.9478466Z DOCKER_IMAGE_NAME: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:14:56.9479427Z DOCKER_BUILD_DIR: .ci/docker 2025-12-04T10:14:56.9479811Z DOCKER_BUILD_SCRIPT: ./build.sh 2025-12-04T10:14:56.9480825Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:14:56.9481348Z USE_CUSTOM_DOCKER_REGISTRY: true 2025-12-04T10:14:56.9481736Z CUSTOM_TAG_PREFIX: 2025-12-04T10:14:56.9482071Z ##[endgroup] 2025-12-04T10:14:56.9514083Z + [[ -d .ci/docker ]] 2025-12-04T10:14:56.9514497Z + [[ -f .ci/docker/./build.sh ]] 2025-12-04T10:14:56.9514875Z + [[ true == \t\r\u\e ]] 2025-12-04T10:14:56.9515212Z + echo skip=false 2025-12-04T10:14:56.9516467Z + [[ 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a == 
*\3\0\8\5\3\5\3\8\5\1\1\4\.\d\k\r\.\e\c\r\.\u\s\-\e\a\s\t\-\1\.\a\m\a\z\o\n\a\w\s\.\c\o\m\/\p\y\t\o\r\c\h* ]] 2025-12-04T10:14:56.9526049Z ++ echo 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:14:56.9526957Z ++ awk -F '[:,]' '{print $2}' 2025-12-04T10:14:56.9543697Z + DOCKER_TAG=pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:14:56.9544621Z + echo docker-tag=pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:14:56.9546497Z + echo docker-image=308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:14:56.9622700Z ##[group]Run set +e 2025-12-04T10:14:56.9623137Z set +e 2025-12-04T10:14:56.9623446Z set -x 2025-12-04T10:14:56.9623745Z  2025-12-04T10:14:56.9624031Z login() { 2025-12-04T10:14:56.9624658Z  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1" 2025-12-04T10:14:56.9625287Z } 2025-12-04T10:14:56.9625565Z  2025-12-04T10:14:56.9625847Z retry () { 2025-12-04T10:14:56.9626213Z  $* || (sleep 1 && $*) || (sleep 2 && $*) 2025-12-04T10:14:56.9626619Z } 2025-12-04T10:14:56.9626888Z  2025-12-04T10:14:56.9627200Z retry login "${DOCKER_REGISTRY}" 2025-12-04T10:14:56.9627587Z  2025-12-04T10:14:56.9628107Z START_TIME=$(date +%s) 2025-12-04T10:14:56.9628511Z # Wait up to 120 minutes 2025-12-04T10:14:56.9628995Z while [[ $(( $(date +%s) - 7200 )) -lt $START_TIME ]]; do 2025-12-04T10:14:56.9629813Z  # Check if image already exists, if it does then skip building it 2025-12-04T10:14:56.9630431Z  if docker manifest inspect "${DOCKER_IMAGE}"; then 2025-12-04T10:14:56.9630952Z  exit 0 2025-12-04T10:14:56.9631263Z  fi 2025-12-04T10:14:56.9631543Z  2025-12-04T10:14:56.9632022Z  # NB: This flag is used by Docker build workflow to push the image to ECR, so we can 2025-12-04T10:14:56.9632831Z  # use this to differentiate between the Docker build and regular build jobs. For the 2025-12-04T10:14:56.9633624Z  # latter, it will wait for the Docker images to become available before continuing 2025-12-04T10:14:56.9634268Z  if [ "${DOCKER_PUSH:-false}" == "true" ]; then 2025-12-04T10:14:56.9634792Z  # It's a Docker build job, let's build the image 2025-12-04T10:14:56.9635228Z  break 2025-12-04T10:14:56.9635543Z  else 2025-12-04T10:14:56.9635985Z  # It's a regular build job, wait for the image to become available 2025-12-04T10:14:56.9636493Z  sleep 300 2025-12-04T10:14:56.9636814Z  fi 2025-12-04T10:14:56.9637108Z done 2025-12-04T10:14:56.9637391Z  2025-12-04T10:14:56.9637837Z # NB: This part requires a full checkout. Otherwise, the merge base will 2025-12-04T10:14:56.9638565Z # be empty. 
The default action would be to continue rebuilding the image 2025-12-04T10:14:56.9639206Z if [[ "$BASE_REVISION" = "$(git rev-parse HEAD)" ]]; then 2025-12-04T10:14:56.9639779Z  # if we're on the base branch then use the parent commit 2025-12-04T10:14:56.9640290Z  MERGE_BASE=$(git rev-parse HEAD~) 2025-12-04T10:14:56.9640751Z else 2025-12-04T10:14:56.9641172Z  # otherwise we're on a PR, so use the most recent base commit 2025-12-04T10:14:56.9641776Z  MERGE_BASE=$(git merge-base HEAD "$BASE_REVISION") 2025-12-04T10:14:56.9642260Z fi 2025-12-04T10:14:56.9642545Z  2025-12-04T10:14:56.9642867Z if [[ -z "${MERGE_BASE}" ]]; then 2025-12-04T10:14:56.9643333Z  echo "rebuild=true" >> "${GITHUB_OUTPUT}" 2025-12-04T10:14:56.9643753Z  2025-12-04T10:14:56.9644324Z  echo "Finding merge base only works with full checkout, please set fetch-depth to 0, continuing ..." 2025-12-04T10:14:56.9644975Z  exit 0 2025-12-04T10:14:56.9645271Z fi 2025-12-04T10:14:56.9645552Z  2025-12-04T10:14:56.9645949Z if ! git rev-parse "${MERGE_BASE}:${DOCKER_BUILD_DIR}"; then 2025-12-04T10:14:56.9646770Z  echo "Directory '${DOCKER_BUILD_DIR}' not found in commit $MERGE_BASE, you should rebase onto a more recent commit" 2025-12-04T10:14:56.9647467Z  exit 1 2025-12-04T10:14:56.9647762Z fi 2025-12-04T10:14:56.9648045Z  2025-12-04T10:14:56.9648509Z PREVIOUS_DOCKER_TAG=$(git rev-parse "${MERGE_BASE}:${DOCKER_BUILD_DIR}") 2025-12-04T10:14:56.9649310Z # If no image exists but the hash is the same as the previous hash then we should error out here 2025-12-04T10:14:56.9650023Z if [[ "${PREVIOUS_DOCKER_TAG}" == "${DOCKER_TAG}" ]]; then 2025-12-04T10:14:56.9651171Z  echo "WARNING: Something has gone wrong and the previous image isn't available for the merge-base of your branch" 2025-12-04T10:14:56.9652072Z  echo " Will re-build docker image to store in local cache, TTS may be longer" 2025-12-04T10:14:56.9652626Z fi 2025-12-04T10:14:56.9652906Z  2025-12-04T10:14:56.9653256Z echo "rebuild=true" >> "${GITHUB_OUTPUT}" 2025-12-04T10:14:56.9662906Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:14:56.9663913Z env: 2025-12-04T10:14:56.9664213Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:14:56.9664658Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:14:56.9665320Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:14:56.9665864Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:14:56.9667522Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:14:56.9669137Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:14:56.9669518Z AWS_REGION: us-east-1 2025-12-04T10:14:56.9670045Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:14:56.9670544Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:14:56.9677686Z AWS_SESSION_TOKEN: *** 2025-12-04T10:14:56.9678055Z DOCKER_BUILD_DIR: .ci/docker 2025-12-04T10:14:56.9678510Z BASE_REVISION: ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:14:56.9679538Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:14:56.9680784Z DOCKER_TAG: pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:14:56.9681531Z DOCKER_REGISTRY: 
308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:14:56.9682015Z DOCKER_PUSH: 2025-12-04T10:14:56.9682343Z ##[endgroup] 2025-12-04T10:14:56.9709320Z + retry login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:14:56.9709979Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:14:56.9710567Z + aws ecr get-login-password --region us-east-1 2025-12-04T10:14:56.9711300Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:14:56.9712223Z /home/runner/_work/_temp/31870a72-d959-49b4-88f5-252f3b677047.sh: line 5: aws: command not found 2025-12-04T10:14:56.9809932Z Error: Cannot perform an interactive login from a non TTY device 2025-12-04T10:14:56.9823823Z + sleep 1 2025-12-04T10:14:57.9842192Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:14:57.9848054Z + aws ecr get-login-password --region us-east-1 2025-12-04T10:14:57.9848850Z /home/runner/_work/_temp/31870a72-d959-49b4-88f5-252f3b677047.sh: line 5: aws: command not found 2025-12-04T10:14:57.9849739Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:14:57.9953067Z Error: Cannot perform an interactive login from a non TTY device 2025-12-04T10:14:57.9965231Z + sleep 2 2025-12-04T10:14:59.9981964Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:14:59.9986915Z + aws ecr get-login-password --region us-east-1 2025-12-04T10:14:59.9987651Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:14:59.9988582Z /home/runner/_work/_temp/31870a72-d959-49b4-88f5-252f3b677047.sh: line 5: aws: command not found 2025-12-04T10:15:00.0065308Z Error: Cannot perform an interactive login from a non TTY device 2025-12-04T10:15:00.0083649Z ++ date +%s 2025-12-04T10:15:00.0099250Z + START_TIME=1764843300 2025-12-04T10:15:00.0104406Z ++ date +%s 2025-12-04T10:15:00.0113498Z + [[ 1764836100 -lt 1764843300 ]] 2025-12-04T10:15:00.0114547Z + docker manifest inspect 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:15:01.3936800Z { 2025-12-04T10:15:01.3937140Z "schemaVersion": 2, 2025-12-04T10:15:01.3937733Z "mediaType": "application/vnd.docker.distribution.manifest.v2+json", 2025-12-04T10:15:01.3938306Z "config": { 2025-12-04T10:15:01.3938754Z "mediaType": "application/vnd.docker.container.image.v1+json", 2025-12-04T10:15:01.3939268Z "size": 30520, 2025-12-04T10:15:01.3939788Z "digest": "sha256:45252333063339f104d56e41f20304e9511ab21c7768e8d156b95ddf24a9dbe5" 2025-12-04T10:15:01.3941034Z }, 2025-12-04T10:15:01.3941312Z "layers": [ 2025-12-04T10:15:01.3941602Z { 2025-12-04T10:15:01.3942033Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3942757Z "size": 30447951, 2025-12-04T10:15:01.3943307Z "digest": "sha256:63e5bc7682b85ae57a1221210f64d62e7a90b0a30f19af4ca734b8242ae49d63" 2025-12-04T10:15:01.3943888Z }, 2025-12-04T10:15:01.3944146Z { 2025-12-04T10:15:01.3944562Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3945069Z "size": 1554, 2025-12-04T10:15:01.3945583Z "digest": "sha256:835841cca3b7e1464290cdb78e48773e03583413fbed852c3cc5165a392ea44d" 2025-12-04T10:15:01.3946142Z }, 2025-12-04T10:15:01.3946388Z { 2025-12-04T10:15:01.3946795Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3947302Z "size": 313275691, 2025-12-04T10:15:01.3947858Z "digest": 
"sha256:aac69780afc8611a5f94a235792d39ae055249c8319ef43b78675998a9b2f825" 2025-12-04T10:15:01.3948419Z }, 2025-12-04T10:15:01.3948665Z { 2025-12-04T10:15:01.3949067Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3949576Z "size": 704, 2025-12-04T10:15:01.3950092Z "digest": "sha256:029495b23122c840ca0e52d487afa8d2c4dbf1991cd7f204ec3e434dcf947bf4" 2025-12-04T10:15:01.3950732Z }, 2025-12-04T10:15:01.3950979Z { 2025-12-04T10:15:01.3951387Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3951887Z "size": 1218, 2025-12-04T10:15:01.3952406Z "digest": "sha256:d0fb85b008332051a3f7c052721ef68bde404b46c23fa43ad040373bd367826c" 2025-12-04T10:15:01.3952973Z }, 2025-12-04T10:15:01.3953216Z { 2025-12-04T10:15:01.3953617Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3954113Z "size": 484, 2025-12-04T10:15:01.3954625Z "digest": "sha256:59b63930883363c7d2aaab27cc61555d9f3e119dc18247a8624c98ebdaa354a5" 2025-12-04T10:15:01.3955183Z }, 2025-12-04T10:15:01.3955437Z { 2025-12-04T10:15:01.3955842Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3956347Z "size": 110363202, 2025-12-04T10:15:01.3956894Z "digest": "sha256:dc112c89d57aa1e85082e40a56e5bc743d64f834ae2f98afe91f60c248354d38" 2025-12-04T10:15:01.3957463Z }, 2025-12-04T10:15:01.3957707Z { 2025-12-04T10:15:01.3958112Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3958610Z "size": 4436, 2025-12-04T10:15:01.3959122Z "digest": "sha256:522eab2402e5001810155ef7eb56940b7c01a4fef62ac588886981c3b8ee8e1e" 2025-12-04T10:15:01.3959679Z }, 2025-12-04T10:15:01.3959923Z { 2025-12-04T10:15:01.3960327Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3960889Z "size": 1755, 2025-12-04T10:15:01.3961394Z "digest": "sha256:2b5a11b41761d8ea3b829e4772e4064cb6c4e4989126af324d0057661e4493a1" 2025-12-04T10:15:01.3961947Z }, 2025-12-04T10:15:01.3962191Z { 2025-12-04T10:15:01.3962601Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3963099Z "size": 724, 2025-12-04T10:15:01.3963963Z "digest": "sha256:9681563a88ff9e62494a2740e537440d3df978d466c9478d6a941fae8b57b084" 2025-12-04T10:15:01.3964591Z }, 2025-12-04T10:15:01.3964842Z { 2025-12-04T10:15:01.3965250Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3965768Z "size": 3185588166, 2025-12-04T10:15:01.3966326Z "digest": "sha256:73e33534e9eb94cf29418d65944168962b65fe21f55e9b8bad18c76e9b3a37b8" 2025-12-04T10:15:01.3966893Z }, 2025-12-04T10:15:01.3967133Z { 2025-12-04T10:15:01.3967545Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3968048Z "size": 396, 2025-12-04T10:15:01.3968579Z "digest": "sha256:5bfdaeb5578d6ffcd7db29c48303cbceb13c591210feaa216a8daa7a6d445b4b" 2025-12-04T10:15:01.3969159Z }, 2025-12-04T10:15:01.3969407Z { 2025-12-04T10:15:01.3969969Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3970477Z "size": 236863, 2025-12-04T10:15:01.3971083Z "digest": "sha256:c07d27e4d3a5ba4ad5325bb785b2e4f058fe5e10ec1aeeb413a1e152b073f203" 2025-12-04T10:15:01.3971753Z }, 2025-12-04T10:15:01.3972005Z { 2025-12-04T10:15:01.3972417Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3972923Z "size": 787, 2025-12-04T10:15:01.3973446Z "digest": "sha256:b21856d1bf420da6fa8ec7331b82ab355d4f4178644e7d3a3d3d0fbc3610109a" 
2025-12-04T10:15:01.3974017Z }, 2025-12-04T10:15:01.3974265Z { 2025-12-04T10:15:01.3974672Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3975170Z "size": 106, 2025-12-04T10:15:01.3975680Z "digest": "sha256:cb19d84867e4063f55db9459c28c50a2abc37c06d3c1ca82ba95fa8427cc438a" 2025-12-04T10:15:01.3976240Z }, 2025-12-04T10:15:01.3976484Z { 2025-12-04T10:15:01.3976895Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3977391Z "size": 1496, 2025-12-04T10:15:01.3977901Z "digest": "sha256:8165374f8dccf88a7791a5d31afbe29e4d4542b4f1cf1904945e07f9af6bf8ba" 2025-12-04T10:15:01.3978474Z }, 2025-12-04T10:15:01.3978720Z { 2025-12-04T10:15:01.3979126Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3979630Z "size": 458789560, 2025-12-04T10:15:01.3980172Z "digest": "sha256:1aecc77354ceba59ec6f0d37a558f2dbb6d5c0854553ee8505ac8707b422da6d" 2025-12-04T10:15:01.3980794Z }, 2025-12-04T10:15:01.3981042Z { 2025-12-04T10:15:01.3981449Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3981948Z "size": 164, 2025-12-04T10:15:01.3982468Z "digest": "sha256:465d3fd643aa2ea0ad07335cda66f12f1d7e5e800c4e9385ec466bc8a1ceabda" 2025-12-04T10:15:01.3983035Z }, 2025-12-04T10:15:01.3983284Z { 2025-12-04T10:15:01.3983690Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3984200Z "size": 104, 2025-12-04T10:15:01.3984709Z "digest": "sha256:6c503e779d6f41ca7f51309875df2b725c171926aece7009c4b8a64d1ba3f58e" 2025-12-04T10:15:01.3985275Z }, 2025-12-04T10:15:01.3985533Z { 2025-12-04T10:15:01.3985939Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3986437Z "size": 724, 2025-12-04T10:15:01.3986935Z "digest": "sha256:9681563a88ff9e62494a2740e537440d3df978d466c9478d6a941fae8b57b084" 2025-12-04T10:15:01.3987486Z }, 2025-12-04T10:15:01.3987736Z { 2025-12-04T10:15:01.3988145Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3988646Z "size": 196, 2025-12-04T10:15:01.3989155Z "digest": "sha256:f7e9a021f0ee3d11a50dcb96378af8103a21f6c3c142f54529207648f3ed00b2" 2025-12-04T10:15:01.3989719Z }, 2025-12-04T10:15:01.3989969Z { 2025-12-04T10:15:01.3990378Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3990933Z "size": 2583, 2025-12-04T10:15:01.3991451Z "digest": "sha256:8e023b349080fb11ee55491bc9b842b30e9e3a90246d05b303a73dc62038caf2" 2025-12-04T10:15:01.3992008Z }, 2025-12-04T10:15:01.3992257Z { 2025-12-04T10:15:01.3992671Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3993178Z "size": 7577171420, 2025-12-04T10:15:01.3993706Z "digest": "sha256:8188df80e595a3dbcf84623c6a58a655269898cbb60029435f136d7f9d34ccaa" 2025-12-04T10:15:01.3994261Z }, 2025-12-04T10:15:01.3994510Z { 2025-12-04T10:15:01.3994919Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3995420Z "size": 135, 2025-12-04T10:15:01.3995944Z "digest": "sha256:3c2c2f8c74bfa16c4bf9a832c97bbb1d55205b2b4a2cead02cf74301ca1001fb" 2025-12-04T10:15:01.3996520Z }, 2025-12-04T10:15:01.3996772Z { 2025-12-04T10:15:01.3997201Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.3997704Z "size": 104, 2025-12-04T10:15:01.3998346Z "digest": "sha256:2aa7784fbe3300f8bbfb6bb51cff3b01fd091e829c2bc7ab9e25261a0dd9b3bd" 2025-12-04T10:15:01.3998927Z }, 2025-12-04T10:15:01.3999179Z { 2025-12-04T10:15:01.3999585Z 
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4000183Z "size": 612, 2025-12-04T10:15:01.4000765Z "digest": "sha256:2b3b5215d3ebe8789f0444457bfd5a6e218289b64aa07653ac3d03ddda5e6708" 2025-12-04T10:15:01.4001329Z }, 2025-12-04T10:15:01.4001579Z { 2025-12-04T10:15:01.4001986Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4002493Z "size": 838191945, 2025-12-04T10:15:01.4003034Z "digest": "sha256:99b1f1ea3e857834cebd01763d90fbd700aeb9c2d2ef23eda2cfff5652c9708b" 2025-12-04T10:15:01.4003604Z }, 2025-12-04T10:15:01.4003851Z { 2025-12-04T10:15:01.4004258Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4004756Z "size": 111, 2025-12-04T10:15:01.4005281Z "digest": "sha256:18d6daba0a5768a37ad106b57974f6b7efd35c43a87c246bcd3f43fea88f2d2b" 2025-12-04T10:15:01.4005850Z }, 2025-12-04T10:15:01.4006100Z { 2025-12-04T10:15:01.4006503Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4007045Z "size": 1555, 2025-12-04T10:15:01.4007563Z "digest": "sha256:5277f2a503ebd17ba9d9b86cc9bac86265504adeb449c0647616ddaacd3cbc41" 2025-12-04T10:15:01.4008134Z }, 2025-12-04T10:15:01.4008383Z { 2025-12-04T10:15:01.4008787Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4009287Z "size": 107, 2025-12-04T10:15:01.4009797Z "digest": "sha256:3198a9717aace920fd5de085319adf75091af05fc4318ce4b16a8a5b0e8d449e" 2025-12-04T10:15:01.4010362Z }, 2025-12-04T10:15:01.4010666Z { 2025-12-04T10:15:01.4011074Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4011575Z "size": 166, 2025-12-04T10:15:01.4012082Z "digest": "sha256:99a4918e5808277879449e97ccd7190db6b9aa2d742b57a3b831ce0198522bdd" 2025-12-04T10:15:01.4012635Z }, 2025-12-04T10:15:01.4012885Z { 2025-12-04T10:15:01.4013295Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4013804Z "size": 3526081, 2025-12-04T10:15:01.4014330Z "digest": "sha256:15bb11dfc6acc3537d527d6771c8e711e5605e99f82ec41e805d4600b8a97516" 2025-12-04T10:15:01.4014893Z }, 2025-12-04T10:15:01.4015143Z { 2025-12-04T10:15:01.4015550Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4016052Z "size": 107, 2025-12-04T10:15:01.4016566Z "digest": "sha256:bd87c8766e90e33db17514558ac591cc3f4149afd7abeaef4dd5770bbfa14210" 2025-12-04T10:15:01.4017133Z }, 2025-12-04T10:15:01.4017382Z { 2025-12-04T10:15:01.4017785Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4018286Z "size": 829, 2025-12-04T10:15:01.4018793Z "digest": "sha256:1969e15d0c13874ea5883ed829235a19ef6dc21c8aa6172032b78a8ffa6ff262" 2025-12-04T10:15:01.4019348Z }, 2025-12-04T10:15:01.4019595Z { 2025-12-04T10:15:01.4020005Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4020513Z "size": 26973054, 2025-12-04T10:15:01.4021101Z "digest": "sha256:24a03847d382b73c11969f8f73916a6bedf5ccea12f6f4290b3880f29ceda32a" 2025-12-04T10:15:01.4021665Z }, 2025-12-04T10:15:01.4021914Z { 2025-12-04T10:15:01.4022319Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4022823Z "size": 104, 2025-12-04T10:15:01.4023344Z "digest": "sha256:816e2e34e01839a35d624dbf4bd9ac9bea4c975104af47a0e6b6b6dee6c6f98d" 2025-12-04T10:15:01.4023915Z }, 2025-12-04T10:15:01.4024162Z { 2025-12-04T10:15:01.4024568Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4025067Z 
"size": 424, 2025-12-04T10:15:01.4025579Z "digest": "sha256:b168858b85373f8ddca549d79267a06de4fa945d04bf791c55c9ddc93957fa3c" 2025-12-04T10:15:01.4026138Z }, 2025-12-04T10:15:01.4026393Z { 2025-12-04T10:15:01.4026911Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4027420Z "size": 19309386, 2025-12-04T10:15:01.4027953Z "digest": "sha256:6b8d5ff02e267e38322afbb8a58ed63ce9d75b10e9e73255e6affcbc6b6539bf" 2025-12-04T10:15:01.4028655Z }, 2025-12-04T10:15:01.4028903Z { 2025-12-04T10:15:01.4029311Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4029814Z "size": 826, 2025-12-04T10:15:01.4030331Z "digest": "sha256:4e3b10a5dd6aed29f238d604925e2a4f873141c1087c8dd4fdde5c61e7560893" 2025-12-04T10:15:01.4030969Z }, 2025-12-04T10:15:01.4031285Z + exit 0 2025-12-04T10:15:01.4031549Z { 2025-12-04T10:15:01.4031948Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4032449Z "size": 724, 2025-12-04T10:15:01.4032950Z "digest": "sha256:9681563a88ff9e62494a2740e537440d3df978d466c9478d6a941fae8b57b084" 2025-12-04T10:15:01.4033502Z }, 2025-12-04T10:15:01.4033752Z { 2025-12-04T10:15:01.4034169Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4034672Z "size": 149, 2025-12-04T10:15:01.4035186Z "digest": "sha256:3092fab73b59190b9facfc49bf18f58612172bc2fd68dfa339a1118632616939" 2025-12-04T10:15:01.4035758Z }, 2025-12-04T10:15:01.4036007Z { 2025-12-04T10:15:01.4036414Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4036915Z "size": 136, 2025-12-04T10:15:01.4037439Z "digest": "sha256:20020dd28a15ba092fcbfe906ee39cdddfcc9d0b7eb42fdd6f4c08a984fa9c00" 2025-12-04T10:15:01.4038014Z }, 2025-12-04T10:15:01.4038262Z { 2025-12-04T10:15:01.4038666Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4039166Z "size": 140, 2025-12-04T10:15:01.4039678Z "digest": "sha256:ae5280ce969dcff08c091e9a5f7641f13561b2b0ee44d78b7c3f81d8fe8e6d32" 2025-12-04T10:15:01.4040244Z }, 2025-12-04T10:15:01.4040495Z { 2025-12-04T10:15:01.4040958Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4041469Z "size": 32, 2025-12-04T10:15:01.4041991Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-12-04T10:15:01.4042572Z }, 2025-12-04T10:15:01.4042821Z { 2025-12-04T10:15:01.4043225Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4043726Z "size": 222, 2025-12-04T10:15:01.4044243Z "digest": "sha256:fe17d9eb0fd26d3af4c724bf570d833978b131cedb7dc17a800aa388a246b3cd" 2025-12-04T10:15:01.4044814Z }, 2025-12-04T10:15:01.4045063Z { 2025-12-04T10:15:01.4045472Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4045979Z "size": 346, 2025-12-04T10:15:01.4046482Z "digest": "sha256:a51e0dab2d596e6563483f27c12660007160847d177ba4c31812a8f44ada5754" 2025-12-04T10:15:01.4047034Z }, 2025-12-04T10:15:01.4047280Z { 2025-12-04T10:15:01.4047685Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4048189Z "size": 88300, 2025-12-04T10:15:01.4048724Z "digest": "sha256:6eb176cefd72d37ecbcdf074289a8f1de732d8816cc695ece7e4709d098094d6" 2025-12-04T10:15:01.4049300Z }, 2025-12-04T10:15:01.4049559Z { 2025-12-04T10:15:01.4049961Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4050460Z "size": 106, 2025-12-04T10:15:01.4051012Z "digest": 
"sha256:e7b8cf2e8d5a4c56db9726ce62c1176032408b3b1c25a000592361cb4245e2b5" 2025-12-04T10:15:01.4051574Z }, 2025-12-04T10:15:01.4051823Z { 2025-12-04T10:15:01.4052227Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4052729Z "size": 1671, 2025-12-04T10:15:01.4053252Z "digest": "sha256:ef3a5060abce88884bc8bd815aa41c46427f34eeb132fe0ddd85a3f86e6dc83d" 2025-12-04T10:15:01.4053824Z }, 2025-12-04T10:15:01.4054071Z { 2025-12-04T10:15:01.4054474Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4054975Z "size": 724, 2025-12-04T10:15:01.4055607Z "digest": "sha256:9681563a88ff9e62494a2740e537440d3df978d466c9478d6a941fae8b57b084" 2025-12-04T10:15:01.4056162Z }, 2025-12-04T10:15:01.4056412Z { 2025-12-04T10:15:01.4056820Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4057419Z "size": 138, 2025-12-04T10:15:01.4057942Z "digest": "sha256:a6f4ec14b42b8f0a83d20aa6a985ddb6a1bf64e0ed3d44afd3484b87d4ed5ad3" 2025-12-04T10:15:01.4058520Z }, 2025-12-04T10:15:01.4058768Z { 2025-12-04T10:15:01.4059179Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4059680Z "size": 119, 2025-12-04T10:15:01.4060196Z "digest": "sha256:7e5a0c956cfbd6f8074fbfd3b1d416e6635d632835ec00c8dd4c015a21da19b4" 2025-12-04T10:15:01.4060816Z }, 2025-12-04T10:15:01.4061063Z { 2025-12-04T10:15:01.4061473Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4061978Z "size": 6238423049, 2025-12-04T10:15:01.4062524Z "digest": "sha256:b4f78730cfe76ce091b78b2e2e3d52be03f1097b3e4c3de5bd79f8d13a853132" 2025-12-04T10:15:01.4063095Z }, 2025-12-04T10:15:01.4063345Z { 2025-12-04T10:15:01.4063749Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4064258Z "size": 174, 2025-12-04T10:15:01.4064756Z "digest": "sha256:081028f24389b112683689fd362e8c0d6f358082710e72feab91cea6383feb4d" 2025-12-04T10:15:01.4065303Z }, 2025-12-04T10:15:01.4065549Z { 2025-12-04T10:15:01.4065953Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4066455Z "size": 1896, 2025-12-04T10:15:01.4066986Z "digest": "sha256:a534dcf4b9a9e5fabed742c8a8fc43c9cfe7346ea88ab3c177c3b14fd3afe00a" 2025-12-04T10:15:01.4067566Z }, 2025-12-04T10:15:01.4067815Z { 2025-12-04T10:15:01.4068219Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4068723Z "size": 197577597, 2025-12-04T10:15:01.4069243Z "digest": "sha256:2e77500302cc13224427e1d74e471bd79d5109ba6a5099a83df1d10b786f71ba" 2025-12-04T10:15:01.4069803Z }, 2025-12-04T10:15:01.4070053Z { 2025-12-04T10:15:01.4070460Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4071097Z "size": 304, 2025-12-04T10:15:01.4071622Z "digest": "sha256:bc08246bb4ba18c3ec5bc69e16b6b4e929c5bd0f3fae10eeb0b1a622a63d6fa2" 2025-12-04T10:15:01.4072196Z }, 2025-12-04T10:15:01.4072445Z { 2025-12-04T10:15:01.4072852Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4073350Z "size": 32, 2025-12-04T10:15:01.4073867Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-12-04T10:15:01.4074435Z }, 2025-12-04T10:15:01.4074682Z { 2025-12-04T10:15:01.4075088Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4075590Z "size": 106, 2025-12-04T10:15:01.4076104Z "digest": "sha256:ff0c473ca120ebdcaa2ba10b3274e82032edd5196019e76d4e7584553704ae81" 
2025-12-04T10:15:01.4076674Z }, 2025-12-04T10:15:01.4076926Z { 2025-12-04T10:15:01.4077325Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-12-04T10:15:01.4077827Z "size": 54145662, 2025-12-04T10:15:01.4078375Z "digest": "sha256:6bbc14b250efb3cdaad12c91573c6bb9129ad3e3432f0ed1a7eaebc9958d162f" 2025-12-04T10:15:01.4078950Z } 2025-12-04T10:15:01.4079199Z ] 2025-12-04T10:15:01.4079454Z } 2025-12-04T10:15:01.4118697Z ##[group]Run set -eux 2025-12-04T10:15:01.4119085Z set -eux 2025-12-04T10:15:01.4119628Z # It's ok if this step fails; it would then be an anonymous user like what we used to have 2025-12-04T10:15:01.4121089Z aws secretsmanager get-secret-value --secret-id docker_hub_readonly_token | jq --raw-output '.SecretString' | jq -r .docker_hub_readonly_token | docker login --username pytorchbot --password-stdin || true 2025-12-04T10:15:01.4131985Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:15:01.4132472Z env: 2025-12-04T10:15:01.4132772Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:15:01.4133356Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:15:01.4133945Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:15:01.4134592Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:15:01.4136278Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:15:01.4137911Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:15:01.4138290Z AWS_REGION: us-east-1 2025-12-04T10:15:01.4138821Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:15:01.4139318Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:15:01.4146692Z AWS_SESSION_TOKEN: *** 2025-12-04T10:15:01.4147036Z ##[endgroup] 2025-12-04T10:15:01.4192262Z + aws secretsmanager get-secret-value --secret-id docker_hub_readonly_token 2025-12-04T10:15:01.4193159Z /home/runner/_work/_temp/20ae4615-757c-4d1a-914b-bb32d48223ea.sh: line 3: aws: command not found 2025-12-04T10:15:01.4193862Z + jq --raw-output .SecretString 2025-12-04T10:15:01.4194279Z + jq -r .docker_hub_readonly_token 2025-12-04T10:15:01.4194779Z + docker login --username pytorchbot --password-stdin 2025-12-04T10:15:01.4305457Z Error: Cannot perform an interactive login from a non TTY device 2025-12-04T10:15:01.4313478Z + true 2025-12-04T10:15:01.4467776Z ##[group]Run pytorch/test-infra/.github/actions/pull-docker-image@main 2025-12-04T10:15:01.4468334Z with: 2025-12-04T10:15:01.4469194Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:15:01.4470240Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:15:01.4471094Z env: 2025-12-04T10:15:01.4471394Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:15:01.4471839Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:15:01.4472409Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:15:01.4472949Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:15:01.4474692Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE 
--security-opt seccomp=unconfined --network=host 2025-12-04T10:15:01.4476289Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:15:01.4476665Z AWS_REGION: us-east-1 2025-12-04T10:15:01.4477187Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:15:01.4477685Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:15:01.4484717Z AWS_SESSION_TOKEN: *** 2025-12-04T10:15:01.4485057Z ##[endgroup] 2025-12-04T10:15:01.4504237Z ##[group]Run set -x 2025-12-04T10:15:01.4504611Z set -x 2025-12-04T10:15:01.4504907Z set +e 2025-12-04T10:15:01.4505191Z  2025-12-04T10:15:01.4505476Z login() { 2025-12-04T10:15:01.4506096Z  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1" 2025-12-04T10:15:01.4506738Z } 2025-12-04T10:15:01.4507016Z  2025-12-04T10:15:01.4507290Z retry () { 2025-12-04T10:15:01.4507656Z  $* || (sleep 1 && $*) || (sleep 2 && $*) 2025-12-04T10:15:01.4508070Z } 2025-12-04T10:15:01.4508344Z  2025-12-04T10:15:01.4508656Z retry login "${DOCKER_REGISTRY}" 2025-12-04T10:15:01.4509041Z  2025-12-04T10:15:01.4509644Z IMAGE_SIZE=$(docker manifest inspect "${DOCKER_IMAGE}" | jq '[.layers[].size, .config.size] | add / 1024 / 1024') 2025-12-04T10:15:01.4510455Z echo "Compressed size of image in MB: ${IMAGE_SIZE}" 2025-12-04T10:15:01.4510979Z  2025-12-04T10:15:01.4511255Z set -e 2025-12-04T10:15:01.4511693Z # ignore output since only exit code is used for conditional 2025-12-04T10:15:01.4512298Z # only pull docker image if it's not available locally 2025-12-04T10:15:01.4512968Z if ! docker inspect --type=image "${DOCKER_IMAGE}" >/dev/null 2>/dev/null; then 2025-12-04T10:15:01.4513771Z  retry docker pull "${DOCKER_IMAGE}" 2025-12-04T10:15:01.4514176Z fi 2025-12-04T10:15:01.4523819Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:15:01.4524296Z env: 2025-12-04T10:15:01.4524595Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:15:01.4525034Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:15:01.4525601Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:15:01.4526138Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:15:01.4527794Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:15:01.4529409Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:15:01.4529796Z AWS_REGION: us-east-1 2025-12-04T10:15:01.4530225Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:15:01.4530771Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:15:01.4537813Z AWS_SESSION_TOKEN: *** 2025-12-04T10:15:01.4538885Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:15:01.4539923Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:15:01.4540417Z ##[endgroup] 2025-12-04T10:15:01.4570985Z + set +e 2025-12-04T10:15:01.4571410Z + retry login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:15:01.4571972Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:15:01.4574425Z + aws ecr get-login-password --region us-east-1 2025-12-04T10:15:01.4575167Z /home/runner/_work/_temp/80e1bcdb-99a1-4241-b5c2-5dcf5d5a35d8.sh: line 5: aws: command not found 2025-12-04T10:15:01.4576776Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 
2025-12-04T10:15:01.4655950Z Error: Cannot perform an interactive login from a non TTY device 2025-12-04T10:15:01.4666543Z + sleep 1 2025-12-04T10:15:02.4677861Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:15:02.4682508Z + aws ecr get-login-password --region us-east-1 2025-12-04T10:15:02.4683323Z /home/runner/_work/_temp/80e1bcdb-99a1-4241-b5c2-5dcf5d5a35d8.sh: line 5: aws: command not found 2025-12-04T10:15:02.4684212Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:15:02.4779892Z Error: Cannot perform an interactive login from a non TTY device 2025-12-04T10:15:02.4794075Z + sleep 2 2025-12-04T10:15:04.4808517Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:15:04.4813669Z + aws ecr get-login-password --region us-east-1 2025-12-04T10:15:04.4814489Z /home/runner/_work/_temp/80e1bcdb-99a1-4241-b5c2-5dcf5d5a35d8.sh: line 5: aws: command not found 2025-12-04T10:15:04.4815450Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T10:15:04.4900084Z Error: Cannot perform an interactive login from a non TTY device 2025-12-04T10:15:04.4922048Z ++ jq '[.layers[].size, .config.size] | add / 1024 / 1024' 2025-12-04T10:15:04.4923211Z ++ docker manifest inspect 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:15:05.8513418Z + IMAGE_SIZE=18171.470620155334 2025-12-04T10:15:05.8514026Z + echo 'Compressed size of image in MB: 18171.470620155334' 2025-12-04T10:15:05.8514520Z + set -e 2025-12-04T10:15:05.8515471Z + docker inspect --type=image 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:15:05.8516528Z Compressed size of image in MB: 18171.470620155334 2025-12-04T10:15:05.8742701Z Prepare all required actions 2025-12-04T10:15:05.8784543Z ##[group]Run ./.github/actions/get-workflow-job-id 2025-12-04T10:15:05.8785158Z with: 2025-12-04T10:15:05.8785780Z github-token: *** 2025-12-04T10:15:05.8786090Z env: 2025-12-04T10:15:05.8786388Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:15:05.8786822Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:15:05.8787399Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:15:05.8787942Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:15:05.8789616Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:15:05.8791300Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:15:05.8791728Z AWS_REGION: us-east-1 2025-12-04T10:15:05.8792167Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:15:05.8792651Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:15:05.8799878Z AWS_SESSION_TOKEN: *** 2025-12-04T10:15:05.8800213Z ##[endgroup] 2025-12-04T10:15:05.8818436Z ##[group]Run set -eux 2025-12-04T10:15:05.8818789Z set -eux 2025-12-04T10:15:05.8819335Z python3 .github/scripts/get_workflow_job_id.py "${GITHUB_RUN_ID}" "${RUNNER_NAME}" 2025-12-04T10:15:05.8828855Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:15:05.8829323Z env: 2025-12-04T10:15:05.8829621Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:15:05.8830046Z RUNNER_ARTIFACT_DIR: 
/home/runner/_work/_temp/artifacts 2025-12-04T10:15:05.8830708Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:15:05.8831245Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:15:05.8832897Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:15:05.8834511Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:15:05.8834891Z AWS_REGION: us-east-1 2025-12-04T10:15:05.8835316Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:15:05.8835799Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:15:05.8843004Z AWS_SESSION_TOKEN: *** 2025-12-04T10:15:05.8843497Z GITHUB_TOKEN: *** 2025-12-04T10:15:05.8843803Z ##[endgroup] 2025-12-04T10:15:05.8872388Z + python3 .github/scripts/get_workflow_job_id.py 19922849170 linux.rocm.gpu.gfx942.4.b-bphpw-runner-mcn25 2025-12-04T10:15:07.1809678Z Setting output job-id=57116213174 2025-12-04T10:15:07.1810597Z Setting output job-name=linux-jammy-rocm-py3.10 / test (distributed, 1, 3, linux.rocm.gpu.gfx942.4.b, mem_leak_check, unstable) 2025-12-04T10:15:07.2045080Z Prepare all required actions 2025-12-04T10:15:07.2045637Z Getting action download info 2025-12-04T10:15:07.4371409Z Download action repository 'seemethere/download-artifact-s3@v4' (SHA:1da556a7aa0a088e3153970611f6c432d58e80e6) 2025-12-04T10:15:08.5460478Z Download action repository 'actions/download-artifact@v4' (SHA:d3f86a106a0bac45b974a628896c90dbdf5c8093) 2025-12-04T10:15:09.6211420Z ##[group]Run ./.github/actions/download-build-artifacts 2025-12-04T10:15:09.6211579Z with: 2025-12-04T10:15:09.6211681Z name: linux-jammy-rocm-py3.10 2025-12-04T10:15:09.6211804Z s3-bucket: gha-artifacts 2025-12-04T10:15:09.6211909Z env: 2025-12-04T10:15:09.6212005Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:15:09.6212139Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:15:09.6212317Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:15:09.6212482Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:15:09.6213024Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:15:09.6213622Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:15:09.6213736Z AWS_REGION: us-east-1 2025-12-04T10:15:09.6213907Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:15:09.6214055Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:15:09.6216180Z AWS_SESSION_TOKEN: *** 2025-12-04T10:15:09.6216286Z ##[endgroup] 2025-12-04T10:15:09.6269001Z ##[group]Run seemethere/download-artifact-s3@v4 2025-12-04T10:15:09.6269278Z with: 2025-12-04T10:15:09.6290149Z name: linux-jammy-rocm-py3.10 2025-12-04T10:15:09.6290393Z s3-bucket: gha-artifacts 2025-12-04T10:15:09.6290671Z region: us-east-1 2025-12-04T10:15:09.6290868Z env: 2025-12-04T10:15:09.6291076Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:15:09.6291359Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:15:09.6291719Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:15:09.6292055Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:15:09.6293100Z GPU_FLAG: --device=/dev/mem 
--device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:15:09.6294118Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:15:09.6294351Z AWS_REGION: us-east-1 2025-12-04T10:15:09.6294640Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:15:09.6294943Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:15:09.6299322Z AWS_SESSION_TOKEN: *** 2025-12-04T10:15:09.6299532Z ##[endgroup] 2025-12-04T10:15:09.8650210Z (node:17082) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023. 2025-12-04T10:15:09.8650944Z 2025-12-04T10:15:09.8651213Z Please migrate your code to use AWS SDK for JavaScript (v3). 2025-12-04T10:15:09.8651918Z For more information, check the migration guide at https://a.co/7PzMCcy 2025-12-04T10:15:09.8652640Z (Use `node --trace-warnings ...` to show where the warning was created) 2025-12-04T10:15:10.1397391Z Found 1 objects with prefix pytorch/pytorch/19922849170/linux-jammy-rocm-py3.10/ 2025-12-04T10:15:10.1397956Z Starting download (1/1): /home/runner/_work/pytorch/pytorch/artifacts.zip 2025-12-04T10:15:52.2550463Z Finished download (1/1): /home/runner/_work/pytorch/pytorch/artifacts.zip 2025-12-04T10:15:52.2562116Z Artifact download has finished successfully 2025-12-04T10:15:52.2987406Z ##[group]Run unzip -o artifacts.zip 2025-12-04T10:15:52.2987899Z unzip -o artifacts.zip 2025-12-04T10:15:52.2998455Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:15:52.2998937Z env: 2025-12-04T10:15:52.2999567Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:15:52.3000011Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:15:52.3000664Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:15:52.3001268Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:15:52.3002962Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:15:52.3004601Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:15:52.3004990Z AWS_REGION: us-east-1 2025-12-04T10:15:52.3005485Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:15:52.3005987Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:15:52.3013244Z AWS_SESSION_TOKEN: *** 2025-12-04T10:15:52.3013589Z ##[endgroup] 2025-12-04T10:15:52.3070110Z Archive: artifacts.zip 2025-12-04T10:15:52.3071624Z creating: dist/ 2025-12-04T10:15:52.3159321Z inflating: dist/.ninja_log 2025-12-04T10:15:55.2394054Z inflating: dist/torch-2.10.0a0+gitffd9b0f-cp310-cp310-linux_x86_64.whl 2025-12-04T10:15:55.2398943Z creating: build/ 2025-12-04T10:15:55.2399497Z creating: build/custom_test_artifacts/ 2025-12-04T10:15:55.2400049Z creating: build/custom_test_artifacts/custom-op-build/ 2025-12-04T10:15:55.2400767Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/ 2025-12-04T10:15:55.2401506Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/pkgRedirects/ 2025-12-04T10:15:55.2402338Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeConfigureLog.yaml 2025-12-04T10:15:55.2403137Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/ 
2025-12-04T10:15:55.2403926Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/CMakeSystem.cmake 2025-12-04T10:15:55.2404813Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/CompilerIdC/ 2025-12-04T10:15:55.2405676Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/CompilerIdC/tmp/ 2025-12-04T10:15:55.2406639Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/CompilerIdC/CMakeCCompilerId.c 2025-12-04T10:15:55.2407593Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/CompilerIdC/a.out 2025-12-04T10:15:55.2408479Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/CMakeCCompiler.cmake 2025-12-04T10:15:55.2409340Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/CompilerIdCXX/ 2025-12-04T10:15:55.2410173Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/CompilerIdCXX/tmp/ 2025-12-04T10:15:55.2411233Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-12-04T10:15:55.2412464Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/CompilerIdCXX/a.out 2025-12-04T10:15:55.2413380Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/CMakeCXXCompiler.cmake 2025-12-04T10:15:55.2414371Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/CMakeDetermineCompilerABI_C.bin 2025-12-04T10:15:55.2415418Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.31.6/CMakeDetermineCompilerABI_CXX.bin 2025-12-04T10:15:55.2416334Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeScratch/ 2025-12-04T10:15:55.2417070Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeTmp/ 2025-12-04T10:15:55.2417837Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/cmake.check_cache 2025-12-04T10:15:55.2418636Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/ 2025-12-04T10:15:55.2420474Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.ts 2025-12-04T10:15:55.2421518Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.make 2025-12-04T10:15:55.2422469Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/depend.make 2025-12-04T10:15:55.2423361Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/link.txt 2025-12-04T10:15:55.2424263Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/cmake_clean.cmake 2025-12-04T10:15:55.2425178Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/build.make 2025-12-04T10:15:55.2426091Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/DependInfo.cmake 2025-12-04T10:15:55.2427000Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/flags.make 2025-12-04T10:15:55.2427904Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/progress.make 2025-12-04T10:15:55.2428813Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o.d 2025-12-04T10:15:55.2523689Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o 2025-12-04T10:15:55.2524803Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/ 
2025-12-04T10:15:55.2525728Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.ts 2025-12-04T10:15:55.2526762Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.make 2025-12-04T10:15:55.2527754Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/depend.make 2025-12-04T10:15:55.2528683Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/link.txt 2025-12-04T10:15:55.2529660Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/cmake_clean.cmake 2025-12-04T10:15:55.2530670Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/build.make 2025-12-04T10:15:55.2531636Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/DependInfo.cmake 2025-12-04T10:15:55.2532601Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/flags.make 2025-12-04T10:15:55.2533539Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/progress.make 2025-12-04T10:15:55.2537737Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o.d 2025-12-04T10:15:55.2581769Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o 2025-12-04T10:15:55.2582869Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-12-04T10:15:55.2583835Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/TargetDirectories.txt 2025-12-04T10:15:55.2584702Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/progress.marks 2025-12-04T10:15:55.2585499Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile2 2025-12-04T10:15:55.2586288Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile.cmake 2025-12-04T10:15:55.2587089Z inflating: build/custom_test_artifacts/custom-op-build/hipblaslt_test_outer_vec.cc 2025-12-04T10:15:55.2587895Z inflating: build/custom_test_artifacts/custom-op-build/hipblaslt_test_vec_ext.cc 2025-12-04T10:15:55.2588635Z inflating: build/custom_test_artifacts/custom-op-build/CMakeCache.txt 2025-12-04T10:15:55.2589318Z inflating: build/custom_test_artifacts/custom-op-build/Makefile 2025-12-04T10:15:55.2590003Z inflating: build/custom_test_artifacts/custom-op-build/cmake_install.cmake 2025-12-04T10:15:55.2676262Z inflating: build/custom_test_artifacts/custom-op-build/libcustom_ops.so 2025-12-04T10:15:55.2705666Z inflating: build/custom_test_artifacts/custom-op-build/test_custom_ops 2025-12-04T10:15:55.2706358Z creating: build/custom_test_artifacts/jit-hook-build/ 2025-12-04T10:15:55.2706974Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/ 2025-12-04T10:15:55.2707681Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/pkgRedirects/ 2025-12-04T10:15:55.2708512Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeConfigureLog.yaml 2025-12-04T10:15:55.2709298Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/ 2025-12-04T10:15:55.2710078Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/CMakeSystem.cmake 2025-12-04T10:15:55.2711000Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/CompilerIdC/ 2025-12-04T10:15:55.2711808Z creating: 
build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/CompilerIdC/tmp/ 2025-12-04T10:15:55.2712755Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/CompilerIdC/CMakeCCompilerId.c 2025-12-04T10:15:55.2713691Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/CompilerIdC/a.out 2025-12-04T10:15:55.2714709Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/CMakeCCompiler.cmake 2025-12-04T10:15:55.2715557Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/CompilerIdCXX/ 2025-12-04T10:15:55.2716386Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/CompilerIdCXX/tmp/ 2025-12-04T10:15:55.2717353Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-12-04T10:15:55.2718355Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/CompilerIdCXX/a.out 2025-12-04T10:15:55.2719261Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/CMakeCXXCompiler.cmake 2025-12-04T10:15:55.2720237Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/CMakeDetermineCompilerABI_C.bin 2025-12-04T10:15:55.2721316Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.31.6/CMakeDetermineCompilerABI_CXX.bin 2025-12-04T10:15:55.2722209Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeScratch/ 2025-12-04T10:15:55.2722924Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeTmp/ 2025-12-04T10:15:55.2723674Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/cmake.check_cache 2025-12-04T10:15:55.2724466Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/ 2025-12-04T10:15:55.2725352Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.ts 2025-12-04T10:15:55.2726353Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.make 2025-12-04T10:15:55.2727332Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/depend.make 2025-12-04T10:15:55.2728235Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/link.txt 2025-12-04T10:15:55.2729175Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/cmake_clean.cmake 2025-12-04T10:15:55.2730113Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/build.make 2025-12-04T10:15:55.2731084Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/DependInfo.cmake 2025-12-04T10:15:55.2732028Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/flags.make 2025-12-04T10:15:55.2732948Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/progress.make 2025-12-04T10:15:55.2733941Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o.d 2025-12-04T10:15:55.2764497Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o 2025-12-04T10:15:55.2765573Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-12-04T10:15:55.2766513Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/TargetDirectories.txt 2025-12-04T10:15:55.2767356Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/progress.marks 
2025-12-04T10:15:55.2768130Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile2 2025-12-04T10:15:55.2768896Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile.cmake 2025-12-04T10:15:55.2769689Z inflating: build/custom_test_artifacts/jit-hook-build/hipblaslt_test_outer_vec.cc 2025-12-04T10:15:55.2770472Z inflating: build/custom_test_artifacts/jit-hook-build/hipblaslt_test_vec_ext.cc 2025-12-04T10:15:55.2771267Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeCache.txt 2025-12-04T10:15:55.2771932Z inflating: build/custom_test_artifacts/jit-hook-build/Makefile 2025-12-04T10:15:55.2772604Z inflating: build/custom_test_artifacts/jit-hook-build/cmake_install.cmake 2025-12-04T10:15:55.2788382Z inflating: build/custom_test_artifacts/jit-hook-build/test_jit_hooks 2025-12-04T10:15:55.2789095Z creating: build/custom_test_artifacts/custom-backend-build/ 2025-12-04T10:15:55.2789768Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/ 2025-12-04T10:15:55.2790529Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/pkgRedirects/ 2025-12-04T10:15:55.2791499Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeConfigureLog.yaml 2025-12-04T10:15:55.2792355Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/ 2025-12-04T10:15:55.2793198Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/CMakeSystem.cmake 2025-12-04T10:15:55.2794108Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/CompilerIdC/ 2025-12-04T10:15:55.2794991Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/CompilerIdC/tmp/ 2025-12-04T10:15:55.2796010Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/CompilerIdC/CMakeCCompilerId.c 2025-12-04T10:15:55.2797020Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/CompilerIdC/a.out 2025-12-04T10:15:55.2797967Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/CMakeCCompiler.cmake 2025-12-04T10:15:55.2798898Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/CompilerIdCXX/ 2025-12-04T10:15:55.2799796Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/CompilerIdCXX/tmp/ 2025-12-04T10:15:55.2800897Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-12-04T10:15:55.2801951Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/CompilerIdCXX/a.out 2025-12-04T10:15:55.2802936Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/CMakeCXXCompiler.cmake 2025-12-04T10:15:55.2803975Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/CMakeDetermineCompilerABI_C.bin 2025-12-04T10:15:55.2805084Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.31.6/CMakeDetermineCompilerABI_CXX.bin 2025-12-04T10:15:55.2806051Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeScratch/ 2025-12-04T10:15:55.2806832Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeTmp/ 2025-12-04T10:15:55.2807640Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/cmake.check_cache 2025-12-04T10:15:55.2808495Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/ 2025-12-04T10:15:55.2809603Z inflating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.ts 2025-12-04T10:15:55.2810737Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.make 2025-12-04T10:15:55.2811807Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/depend.make 2025-12-04T10:15:55.2812781Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/link.txt 2025-12-04T10:15:55.2813792Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/cmake_clean.cmake 2025-12-04T10:15:55.2814809Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/build.make 2025-12-04T10:15:55.2815820Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/DependInfo.cmake 2025-12-04T10:15:55.2816831Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/flags.make 2025-12-04T10:15:55.2817829Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/progress.make 2025-12-04T10:15:55.2819064Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o.d 2025-12-04T10:15:55.2867041Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o 2025-12-04T10:15:55.2868128Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/ 2025-12-04T10:15:55.2869185Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.ts 2025-12-04T10:15:55.2870344Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.make 2025-12-04T10:15:55.2871519Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/depend.make 2025-12-04T10:15:55.2872557Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/link.txt 2025-12-04T10:15:55.2873626Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/cmake_clean.cmake 2025-12-04T10:15:55.2874692Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/build.make 2025-12-04T10:15:55.2875773Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/DependInfo.cmake 2025-12-04T10:15:55.2876850Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/flags.make 2025-12-04T10:15:55.2877902Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/progress.make 2025-12-04T10:15:55.2880478Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o.d 2025-12-04T10:15:55.2910260Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o 2025-12-04T10:15:55.2911585Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-12-04T10:15:55.2912602Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/TargetDirectories.txt 2025-12-04T10:15:55.2913515Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/progress.marks 2025-12-04T10:15:55.2914345Z inflating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile2 2025-12-04T10:15:55.2915159Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile.cmake 2025-12-04T10:15:55.2916027Z inflating: build/custom_test_artifacts/custom-backend-build/hipblaslt_test_outer_vec.cc 2025-12-04T10:15:55.2916870Z inflating: build/custom_test_artifacts/custom-backend-build/hipblaslt_test_vec_ext.cc 2025-12-04T10:15:55.2917790Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeCache.txt 2025-12-04T10:15:55.2918504Z inflating: build/custom_test_artifacts/custom-backend-build/Makefile 2025-12-04T10:15:55.2919244Z inflating: build/custom_test_artifacts/custom-backend-build/cmake_install.cmake 2025-12-04T10:15:55.2967573Z inflating: build/custom_test_artifacts/custom-backend-build/libcustom_backend.so 2025-12-04T10:15:55.2988297Z inflating: build/custom_test_artifacts/custom-backend-build/test_custom_backend 2025-12-04T10:15:55.2988938Z creating: build/lib/ 2025-12-04T10:15:55.3033428Z inflating: build/lib/libprotobuf-lite.a 2025-12-04T10:15:55.3275531Z inflating: build/lib/libprotobuf.a 2025-12-04T10:15:55.3547193Z inflating: build/lib/libprotoc.a 2025-12-04T10:15:55.3552512Z inflating: build/lib/libpthreadpool.a 2025-12-04T10:15:55.3556385Z inflating: build/lib/libcpuinfo.a 2025-12-04T10:15:55.3560353Z inflating: build/lib/libcpuinfo_internals.a 2025-12-04T10:15:55.3560913Z inflating: build/lib/libclog.a 2025-12-04T10:15:55.3571191Z inflating: build/lib/libpytorch_qnnpack.a 2025-12-04T10:15:55.3573086Z inflating: build/lib/libnnpack_reference_layers.a 2025-12-04T10:15:55.3582845Z inflating: build/lib/libnnpack.a 2025-12-04T10:15:55.3683927Z inflating: build/lib/libmicrokernels-prod.a 2025-12-04T10:15:55.4150308Z inflating: build/lib/libmicrokernels-all.a 2025-12-04T10:15:55.4188076Z inflating: build/lib/libgtest.a 2025-12-04T10:15:55.4197355Z inflating: build/lib/libgmock.a 2025-12-04T10:15:55.4197858Z inflating: build/lib/libgtest_main.a 2025-12-04T10:15:55.4198299Z inflating: build/lib/libgmock_main.a 2025-12-04T10:15:55.4246791Z inflating: build/lib/libXNNPACK.a 2025-12-04T10:15:55.4288259Z inflating: build/lib/libbenchmark.a 2025-12-04T10:15:55.4288791Z inflating: build/lib/libbenchmark_main.a 2025-12-04T10:15:55.4289265Z inflating: build/lib/libjitprofiling.a 2025-12-04T10:15:55.4292570Z inflating: build/lib/libittnotify.a 2025-12-04T10:15:55.4329002Z inflating: build/lib/libasmjit.a 2025-12-04T10:15:55.4949793Z inflating: build/lib/libfbgemm.a 2025-12-04T10:15:55.4966299Z inflating: build/lib/libtensorpipe_uv.a 2025-12-04T10:15:55.5260399Z inflating: build/lib/libtensorpipe.a 2025-12-04T10:15:55.5326015Z inflating: build/lib/libgloo.a 2025-12-04T10:15:55.5351565Z inflating: build/lib/libonnx_proto.a 2025-12-04T10:15:55.5572229Z inflating: build/lib/libgloo_hip.a 2025-12-04T10:15:55.5962932Z inflating: build/lib/libonnx.a 2025-12-04T10:15:56.1469137Z inflating: build/lib/libdnnl.a 2025-12-04T10:15:56.1479031Z inflating: build/lib/libfmt.a 2025-12-04T10:15:56.1647607Z inflating: build/lib/libkineto.a 2025-12-04T10:15:56.1711655Z inflating: build/lib/libc10.so 2025-12-04T10:15:56.1712189Z inflating: build/lib/libtorch_global_deps.so 2025-12-04T10:15:56.1713724Z inflating: build/lib/libcaffe2_nvrtc.so 2025-12-04T10:15:56.1738075Z inflating: build/lib/libc10_hip.so 2025-12-04T10:15:56.2010956Z inflating: build/lib/libfbgemm_genai.a 2025-12-04T10:15:57.8893366Z inflating: build/lib/libtorch_cpu.so 2025-12-04T10:15:57.8894849Z inflating: build/lib/libshm.so 
2025-12-04T10:15:58.7171463Z inflating: build/lib/libtorch_hip.so 2025-12-04T10:15:58.7172042Z inflating: build/lib/libtorch.so 2025-12-04T10:15:58.7182546Z inflating: build/lib/libjitbackend_test.so 2025-12-04T10:15:58.7195925Z inflating: build/lib/libbackend_with_compiler.so 2025-12-04T10:15:58.7234993Z inflating: build/lib/libtorchbind_test.so 2025-12-04T10:15:58.7249479Z inflating: build/lib/libaoti_custom_ops.so 2025-12-04T10:15:58.8535594Z inflating: build/lib/libtorch_python.so 2025-12-04T10:15:58.8555050Z inflating: build/lib/libnnapi_backend.so 2025-12-04T10:15:58.8555548Z creating: build/bin/ 2025-12-04T10:15:58.8555924Z creating: build/bin/CMakeFiles/ 2025-12-04T10:15:58.8557028Z inflating: build/bin/cmake_install.cmake 2025-12-04T10:15:58.8557475Z inflating: build/bin/CTestTestfile.cmake 2025-12-04T10:15:58.8806417Z inflating: build/bin/protoc-3.13.0.0 2025-12-04T10:15:58.9057160Z inflating: build/bin/protoc 2025-12-04T10:15:58.9089860Z inflating: build/bin/c10_AllocatorConfig_test 2025-12-04T10:15:58.9120359Z inflating: build/bin/c10_CompileTimeFunctionPointer_test 2025-12-04T10:15:58.9151875Z inflating: build/bin/c10_DeviceGuard_test 2025-12-04T10:15:58.9183251Z inflating: build/bin/c10_Device_test 2025-12-04T10:15:58.9219188Z inflating: build/bin/c10_DispatchKeySet_test 2025-12-04T10:15:58.9251541Z inflating: build/bin/c10_Scalar_test 2025-12-04T10:15:58.9281230Z inflating: build/bin/c10_StreamGuard_test 2025-12-04T10:15:58.9315552Z inflating: build/bin/c10_SymInt_test 2025-12-04T10:15:58.9349547Z inflating: build/bin/c10_SizesAndStrides_test 2025-12-04T10:15:58.9381637Z inflating: build/bin/c10_Bitset_test 2025-12-04T10:15:58.9423476Z inflating: build/bin/c10_cow_test 2025-12-04T10:15:58.9456237Z inflating: build/bin/c10_InlineDeviceGuard_test 2025-12-04T10:15:58.9489999Z inflating: build/bin/c10_InlineStreamGuard_test 2025-12-04T10:15:58.9520097Z inflating: build/bin/c10_ArrayRef_test 2025-12-04T10:15:58.9550236Z inflating: build/bin/c10_ConstexprCrc_test 2025-12-04T10:15:58.9580675Z inflating: build/bin/c10_DeadlockDetection_test 2025-12-04T10:15:58.9612662Z inflating: build/bin/c10_IntrusiveList_test 2025-12-04T10:15:58.9643490Z inflating: build/bin/c10_Half_test 2025-12-04T10:15:58.9678084Z inflating: build/bin/c10_Enumerate_test 2025-12-04T10:15:58.9711902Z inflating: build/bin/c10_LeftRight_test 2025-12-04T10:15:58.9744063Z inflating: build/bin/c10_NetworkFlow_test 2025-12-04T10:15:58.9774262Z inflating: build/bin/c10_Semaphore_test 2025-12-04T10:15:58.9804917Z inflating: build/bin/c10_Synchronized_test 2025-12-04T10:15:58.9836419Z inflating: build/bin/c10_TypeIndex_test 2025-12-04T10:15:58.9869703Z inflating: build/bin/c10_ThreadLocal_test 2025-12-04T10:15:58.9901355Z inflating: build/bin/c10_accumulate_test 2025-12-04T10:15:58.9935192Z inflating: build/bin/c10_bfloat16_test 2025-12-04T10:15:58.9965264Z inflating: build/bin/c10_error_test 2025-12-04T10:15:58.9996040Z inflating: build/bin/c10_bit_cast_test 2025-12-04T10:15:59.0029501Z inflating: build/bin/c10_complex_test 2025-12-04T10:15:59.0061451Z inflating: build/bin/c10_exception_test 2025-12-04T10:15:59.0095733Z inflating: build/bin/c10_complex_math_test 2025-12-04T10:15:59.0126490Z inflating: build/bin/c10_flags_test 2025-12-04T10:15:59.0157508Z inflating: build/bin/c10_irange_test 2025-12-04T10:15:59.0188313Z inflating: build/bin/c10_generic_math_test 2025-12-04T10:15:59.0278775Z inflating: build/bin/c10_intrusive_ptr_test 2025-12-04T10:15:59.0320546Z inflating: build/bin/c10_logging_test 
2025-12-04T10:15:59.0343488Z inflating: build/bin/c10_nofatal_test 2025-12-04T10:15:59.0375409Z inflating: build/bin/c10_lazy_test 2025-12-04T10:15:59.0413039Z inflating: build/bin/c10_ordered_preserving_dict_test 2025-12-04T10:15:59.0444879Z inflating: build/bin/c10_registry_test 2025-12-04T10:15:59.0476413Z inflating: build/bin/c10_ssize_test 2025-12-04T10:15:59.0521124Z inflating: build/bin/c10_optional_test 2025-12-04T10:15:59.0608075Z inflating: build/bin/c10_small_vector_test 2025-12-04T10:15:59.0642322Z inflating: build/bin/c10_string_util_test 2025-12-04T10:15:59.0672694Z inflating: build/bin/c10_tempfile_test 2025-12-04T10:15:59.0702680Z inflating: build/bin/c10_string_view_test 2025-12-04T10:15:59.0729426Z inflating: build/bin/c10_intrusive_ptr_benchmark 2025-12-04T10:15:59.0762908Z inflating: build/bin/c10_typeid_test 2025-12-04T10:15:59.0793174Z inflating: build/bin/c10_hip_HIPAssertionsTest_1_var_test 2025-12-04T10:15:59.0823032Z inflating: build/bin/c10_hip_HIPAssertionsTest_catches_stream 2025-12-04T10:15:59.0853674Z inflating: build/bin/c10_hip_HIPAssertionsTest_catches_thread_and_block_and_device 2025-12-04T10:15:59.0882763Z inflating: build/bin/c10_hip_HIPAssertionsTest_from_2_processes 2025-12-04T10:15:59.0912658Z inflating: build/bin/c10_hip_HIPAssertionsTest_multiple_writes_from_blocks_and_threads 2025-12-04T10:15:59.0942516Z inflating: build/bin/c10_hip_HIPAssertionsTest_multiple_writes_from_multiple_blocks 2025-12-04T10:15:59.0972054Z inflating: build/bin/c10_hip_HIPAssertionsTest_multiple_writes_from_same_block 2025-12-04T10:15:59.1002127Z inflating: build/bin/c10_hip_HIPTest 2025-12-04T10:15:59.1326754Z inflating: build/bin/vec_test_all_types_DEFAULT 2025-12-04T10:15:59.1659355Z inflating: build/bin/vec_test_all_types_AVX512 2025-12-04T10:15:59.2000474Z inflating: build/bin/vec_test_all_types_AVX2 2025-12-04T10:15:59.2057550Z inflating: build/bin/test_aoti_abi_check 2025-12-04T10:15:59.2087439Z inflating: build/bin/test_vec_half_DEFAULT 2025-12-04T10:15:59.2118021Z inflating: build/bin/test_vec_half_AVX2 2025-12-04T10:15:59.2148442Z inflating: build/bin/test_vec_half_AVX512 2025-12-04T10:15:59.2180306Z inflating: build/bin/BackoffTest 2025-12-04T10:15:59.2212584Z inflating: build/bin/FileStoreTest 2025-12-04T10:15:59.2246891Z inflating: build/bin/TCPStoreTest 2025-12-04T10:15:59.2279428Z inflating: build/bin/HashStoreTest 2025-12-04T10:15:59.2319550Z inflating: build/bin/ProcessGroupGlooTest 2025-12-04T10:15:59.2320554Z inflating: build/bin/example_allreduce 2025-12-04T10:15:59.2323103Z inflating: build/bin/torch_shm_manager 2025-12-04T10:15:59.2355624Z inflating: build/bin/static_runtime_bench 2025-12-04T10:15:59.2498126Z inflating: build/bin/static_runtime_test 2025-12-04T10:15:59.2541527Z inflating: build/bin/Dict_test 2025-12-04T10:15:59.2573598Z inflating: build/bin/Dimname_test 2025-12-04T10:15:59.2612469Z inflating: build/bin/MaybeOwned_test 2025-12-04T10:15:59.2646806Z inflating: build/bin/NamedTensor_test 2025-12-04T10:15:59.2682127Z inflating: build/bin/apply_utils_test 2025-12-04T10:15:59.2717701Z inflating: build/bin/atest 2025-12-04T10:15:59.2756095Z inflating: build/bin/basic 2025-12-04T10:15:59.2788718Z inflating: build/bin/broadcast_test 2025-12-04T10:15:59.2819817Z inflating: build/bin/cpu_allocator_test 2025-12-04T10:15:59.2854948Z inflating: build/bin/cpu_generator_test 2025-12-04T10:15:59.2886820Z inflating: build/bin/cpu_profiling_allocator_test 2025-12-04T10:15:59.2941443Z inflating: build/bin/cpu_rng_test 2025-12-04T10:15:59.2972838Z 
inflating: build/bin/dlconvertor_test 2025-12-04T10:15:59.3007708Z inflating: build/bin/extension_backend_test 2025-12-04T10:15:59.3041134Z inflating: build/bin/half_test 2025-12-04T10:15:59.3098172Z inflating: build/bin/ivalue_test 2025-12-04T10:15:59.3128571Z inflating: build/bin/lazy_tensor_test 2025-12-04T10:15:59.3160767Z inflating: build/bin/math_kernel_test 2025-12-04T10:15:59.3192743Z inflating: build/bin/memory_format_test 2025-12-04T10:15:59.3225129Z inflating: build/bin/memory_overlapping_test 2025-12-04T10:15:59.3257391Z inflating: build/bin/mobile_memory_cleanup 2025-12-04T10:15:59.3291230Z inflating: build/bin/native_test 2025-12-04T10:15:59.3322391Z inflating: build/bin/operator_name_test 2025-12-04T10:15:59.3353101Z inflating: build/bin/operators_test 2025-12-04T10:15:59.3384556Z inflating: build/bin/packedtensoraccessor_test 2025-12-04T10:15:59.3425164Z inflating: build/bin/pow_test 2025-12-04T10:15:59.3459549Z inflating: build/bin/quantized_test 2025-12-04T10:15:59.3489980Z inflating: build/bin/reduce_ops_test 2025-12-04T10:15:59.3521738Z inflating: build/bin/reportMemoryUsage_test 2025-12-04T10:15:59.3554364Z inflating: build/bin/scalar_tensor_test 2025-12-04T10:15:59.3589458Z inflating: build/bin/scalar_test 2025-12-04T10:15:59.3620728Z inflating: build/bin/StorageUtils_test 2025-12-04T10:15:59.3653181Z inflating: build/bin/stride_properties_test 2025-12-04T10:15:59.3699191Z inflating: build/bin/tensor_iterator_test 2025-12-04T10:15:59.3731969Z inflating: build/bin/test_parallel 2025-12-04T10:15:59.3762936Z inflating: build/bin/thread_init_test 2025-12-04T10:15:59.3796033Z inflating: build/bin/type_ptr_test 2025-12-04T10:15:59.3831712Z inflating: build/bin/type_test 2025-12-04T10:15:59.3863474Z inflating: build/bin/undefined_tensor_test 2025-12-04T10:15:59.3893683Z inflating: build/bin/verify_api_visibility 2025-12-04T10:15:59.3935971Z inflating: build/bin/legacy_vmap_test 2025-12-04T10:15:59.3967175Z inflating: build/bin/weakref_test 2025-12-04T10:15:59.3998479Z inflating: build/bin/wrapdim_test 2025-12-04T10:15:59.4059410Z inflating: build/bin/List_test 2025-12-04T10:15:59.4090705Z inflating: build/bin/xla_tensor_test 2025-12-04T10:15:59.4126392Z inflating: build/bin/IListRef_test 2025-12-04T10:15:59.4195569Z inflating: build/bin/kernel_function_legacy_test 2025-12-04T10:15:59.4235130Z inflating: build/bin/KernelFunction_test 2025-12-04T10:15:59.4291173Z inflating: build/bin/kernel_function_test 2025-12-04T10:15:59.4364092Z inflating: build/bin/kernel_lambda_legacy_test 2025-12-04T10:15:59.4423517Z inflating: build/bin/kernel_lambda_test 2025-12-04T10:15:59.4459568Z inflating: build/bin/kernel_stackbased_test 2025-12-04T10:15:59.4515339Z inflating: build/bin/make_boxed_from_unboxed_functor_test 2025-12-04T10:15:59.4546388Z inflating: build/bin/CppSignature_test 2025-12-04T10:15:59.4576322Z inflating: build/bin/op_allowlist_test 2025-12-04T10:15:59.4751690Z inflating: build/bin/op_registration_test 2025-12-04T10:15:59.4781714Z inflating: build/bin/hip_complex_math_test 2025-12-04T10:15:59.4814994Z inflating: build/bin/backend_fallback_test 2025-12-04T10:15:59.4845009Z inflating: build/bin/hip_complex_test 2025-12-04T10:15:59.4884972Z inflating: build/bin/inline_container_test 2025-12-04T10:15:59.4916986Z inflating: build/bin/hip_apply_test 2025-12-04T10:15:59.4946880Z inflating: build/bin/hip_distributions_test 2025-12-04T10:15:59.4976866Z inflating: build/bin/hip_generator_test 2025-12-04T10:15:59.5006601Z inflating: build/bin/hip_half_test 
2025-12-04T10:15:59.5036491Z inflating: build/bin/hip_integer_divider_test 2025-12-04T10:15:59.5066318Z inflating: build/bin/hip_optional_test 2025-12-04T10:15:59.5096197Z inflating: build/bin/hip_packedtensoraccessor_test 2025-12-04T10:15:59.5126056Z inflating: build/bin/hip_vectorized_test 2025-12-04T10:15:59.5157517Z inflating: build/bin/hip_dlconvertor_test 2025-12-04T10:15:59.5772839Z inflating: build/bin/test_jit 2025-12-04T10:15:59.5969589Z inflating: build/bin/test_lazy 2025-12-04T10:15:59.6003063Z inflating: build/bin/test_dist_autograd 2025-12-04T10:15:59.6043937Z inflating: build/bin/test_cpp_rpc 2025-12-04T10:15:59.6044799Z inflating: build/bin/parallel_benchmark 2025-12-04T10:15:59.6697195Z inflating: build/bin/test_api 2025-12-04T10:15:59.6697687Z creating: .additional_ci_files/ 2025-12-04T10:15:59.6732979Z inflating: .additional_ci_files/test-times.json 2025-12-04T10:15:59.6863195Z inflating: .additional_ci_files/test-class-times.json 2025-12-04T10:15:59.6905394Z ##[group]Run rm artifacts.zip 2025-12-04T10:15:59.6905809Z rm artifacts.zip 2025-12-04T10:15:59.6915278Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:15:59.6915754Z env: 2025-12-04T10:15:59.6916054Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:15:59.6916484Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:15:59.6917054Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:15:59.6917594Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:15:59.6919399Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:15:59.6921366Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:15:59.6921762Z AWS_REGION: us-east-1 2025-12-04T10:15:59.6922218Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:15:59.6922723Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:15:59.6929768Z AWS_SESSION_TOKEN: *** 2025-12-04T10:15:59.6930105Z ##[endgroup] 2025-12-04T10:15:59.7962206Z ##[group]Run df -H 2025-12-04T10:15:59.7962551Z df -H 2025-12-04T10:15:59.7972354Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:15:59.7972836Z env: 2025-12-04T10:15:59.7973138Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:15:59.7973570Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:15:59.7974142Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:15:59.7974682Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:15:59.7976921Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:15:59.7978740Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:15:59.7979131Z AWS_REGION: us-east-1 2025-12-04T10:15:59.7979584Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:15:59.7980065Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:15:59.7987394Z AWS_SESSION_TOKEN: *** 2025-12-04T10:15:59.7987736Z ##[endgroup] 2025-12-04T10:15:59.8359996Z Filesystem Size Used Avail Use% Mounted on 2025-12-04T10:15:59.8360472Z overlay 16T 375G 15T 3% / 2025-12-04T10:15:59.8360960Z tmpfs 68M 0 68M 0% /dev 2025-12-04T10:15:59.8361377Z /dev/md0 16T 375G 
15T 3% /run 2025-12-04T10:15:59.8361790Z shm 68M 17k 68M 1% /dev/shm 2025-12-04T10:15:59.8362499Z amdprj2-k8s_2 5.5T 120G 5.4T 3% /home/runner/pytorch-data 2025-12-04T10:15:59.8363111Z tmpfs 3.3T 13k 3.3T 1% /run/secrets/kubernetes.io/serviceaccount 2025-12-04T10:15:59.8363660Z tmpfs 1.7T 0 1.7T 0% /proc/acpi 2025-12-04T10:15:59.8364097Z tmpfs 1.7T 0 1.7T 0% /proc/scsi 2025-12-04T10:15:59.8364532Z tmpfs 1.7T 0 1.7T 0% /sys/firmware 2025-12-04T10:15:59.8365030Z tmpfs 1.7T 0 1.7T 0% /sys/devices/virtual/powercap 2025-12-04T10:15:59.8409767Z Prepare all required actions 2025-12-04T10:15:59.8410301Z Getting action download info 2025-12-04T10:16:00.0710235Z ##[group]Run ./.github/actions/download-td-artifacts 2025-12-04T10:16:00.0710774Z with: 2025-12-04T10:16:00.0711069Z env: 2025-12-04T10:16:00.0711371Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:16:00.0711814Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:16:00.0712393Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:16:00.0712930Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:16:00.0714595Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:16:00.0716259Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:16:00.0716724Z AWS_REGION: us-east-1 2025-12-04T10:16:00.0717251Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:16:00.0717734Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:16:00.0725125Z AWS_SESSION_TOKEN: *** 2025-12-04T10:16:00.0725463Z ##[endgroup] 2025-12-04T10:16:00.0766773Z ##[group]Run seemethere/download-artifact-s3@v4 2025-12-04T10:16:00.0767211Z with: 2025-12-04T10:16:00.0767501Z name: td_results 2025-12-04T10:16:00.0767825Z s3-bucket: gha-artifacts 2025-12-04T10:16:00.0768170Z region: us-east-1 2025-12-04T10:16:00.0768471Z env: 2025-12-04T10:16:00.0768758Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:16:00.0769190Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:16:00.0769757Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:16:00.0770301Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:16:00.0772276Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:16:00.0773858Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:16:00.0774230Z AWS_REGION: us-east-1 2025-12-04T10:16:00.0774645Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:16:00.0775120Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:16:00.0782171Z AWS_SESSION_TOKEN: *** 2025-12-04T10:16:00.0782499Z ##[endgroup] 2025-12-04T10:16:00.3103179Z (node:17116) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023. 2025-12-04T10:16:00.3103797Z 2025-12-04T10:16:00.3104067Z Please migrate your code to use AWS SDK for JavaScript (v3). 
2025-12-04T10:16:00.3105198Z For more information, check the migration guide at https://a.co/7PzMCcy 2025-12-04T10:16:00.3105884Z (Use `node --trace-warnings ...` to show where the warning was created) 2025-12-04T10:16:00.5929996Z Found 1 objects with prefix pytorch/pytorch/19922849170/td_results/ 2025-12-04T10:16:00.5930851Z Starting download (1/1): /home/runner/_work/pytorch/pytorch/td_results.json 2025-12-04T10:16:01.0405224Z Finished download (1/1): /home/runner/_work/pytorch/pytorch/td_results.json 2025-12-04T10:16:01.0416088Z Artifact download has finished successfully 2025-12-04T10:16:01.0693983Z ##[group]Run mkdir -p .additional_ci_files 2025-12-04T10:16:01.0694479Z mkdir -p .additional_ci_files 2025-12-04T10:16:01.0695022Z mv td_results.json .additional_ci_files/td_results.json || true 2025-12-04T10:16:01.0704862Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:16:01.0705369Z env: 2025-12-04T10:16:01.0705682Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:16:01.0706115Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:16:01.0706679Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:16:01.0707209Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:16:01.0709139Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:16:01.0710829Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:16:01.0711205Z AWS_REGION: us-east-1 2025-12-04T10:16:01.0711769Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:16:01.0712258Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:16:01.0719214Z AWS_SESSION_TOKEN: *** 2025-12-04T10:16:01.0719545Z ##[endgroup] 2025-12-04T10:16:01.0826100Z ##[group]Run .github/scripts/parse_ref.py 2025-12-04T10:16:01.0826593Z .github/scripts/parse_ref.py 2025-12-04T10:16:01.0835525Z shell: /usr/bin/bash -e {0} 2025-12-04T10:16:01.0835874Z env: 2025-12-04T10:16:01.0836167Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:16:01.0836604Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:16:01.0837178Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:16:01.0837714Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:16:01.0839368Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:16:01.0841038Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:16:01.0841415Z AWS_REGION: us-east-1 2025-12-04T10:16:01.0841895Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:16:01.0842384Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:16:01.0849397Z AWS_SESSION_TOKEN: *** 2025-12-04T10:16:01.0849730Z ##[endgroup] 2025-12-04T10:16:01.0987618Z Setting output branch=main 2025-12-04T10:16:01.1143030Z Prepare all required actions 2025-12-04T10:16:01.1143621Z Getting action download info 2025-12-04T10:16:01.3442977Z ##[group]Run ./.github/actions/filter-test-configs 2025-12-04T10:16:01.3443446Z with: 2025-12-04T10:16:01.3444053Z github-token: *** 2025-12-04T10:16:01.3454127Z test-matrix: {"include": [{"config": "default", "shard": 1, "num_shards": 6, "runner": 
"linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 1, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}]} 2025-12-04T10:16:01.3464939Z job-name: linux-jammy-rocm-py3.10 / test (distributed, 1, 3, linux.rocm.gpu.gfx942.4.b, mem_leak_check, unstable) 2025-12-04T10:16:01.3465653Z env: 2025-12-04T10:16:01.3465963Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:16:01.3466420Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:16:01.3467009Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:16:01.3467554Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:16:01.3469226Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin 
--cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:16:01.3470911Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:16:01.3471508Z AWS_REGION: us-east-1 2025-12-04T10:16:01.3471910Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:16:01.3472397Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:16:01.3479464Z AWS_SESSION_TOKEN: *** 2025-12-04T10:16:01.3505559Z ##[endgroup] 2025-12-04T10:16:01.3555541Z ##[group]Run nick-fields/retry@v3.0.0 2025-12-04T10:16:01.3555950Z with: 2025-12-04T10:16:01.3556236Z shell: bash 2025-12-04T10:16:01.3556538Z timeout_minutes: 10 2025-12-04T10:16:01.3556869Z max_attempts: 5 2025-12-04T10:16:01.3557187Z retry_wait_seconds: 30 2025-12-04T10:16:01.3558152Z command: set -eux # PyYAML 6.0 doesn't work with MacOS x86 anymore # This must run on Python-3.7 (AmazonLinux2) so can't use request=3.32.2 python3 -m pip install requests==2.27.1 pyyaml==6.0.2 2025-12-04T10:16:01.3559208Z polling_interval_seconds: 1 2025-12-04T10:16:01.3559704Z warning_on_retry: true 2025-12-04T10:16:01.3560054Z continue_on_error: false 2025-12-04T10:16:01.3560415Z env: 2025-12-04T10:16:01.3560773Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:16:01.3561215Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:16:01.3561799Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:16:01.3562342Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:16:01.3563994Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:16:01.3565599Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:16:01.3565982Z AWS_REGION: us-east-1 2025-12-04T10:16:01.3566403Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:16:01.3566889Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:16:01.3574059Z AWS_SESSION_TOKEN: *** 2025-12-04T10:16:01.3574552Z GITHUB_TOKEN: *** 2025-12-04T10:16:01.3574870Z ##[endgroup] 2025-12-04T10:16:01.3985107Z + python3 -m pip install requests==2.27.1 pyyaml==6.0.2 2025-12-04T10:16:01.5395091Z Defaulting to user installation because normal site-packages is not writeable 2025-12-04T10:16:01.6395712Z Collecting requests==2.27.1 2025-12-04T10:16:01.6783831Z Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB) 2025-12-04T10:16:01.6893684Z ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.1/63.1 KB 6.1 MB/s eta 0:00:00 2025-12-04T10:16:01.7383526Z Collecting pyyaml==6.0.2 2025-12-04T10:16:01.7456913Z Downloading PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (751 kB) 2025-12-04T10:16:01.7665652Z ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 751.2/751.2 KB 38.5 MB/s eta 0:00:00 2025-12-04T10:16:01.8651298Z Collecting charset-normalizer~=2.0.0 2025-12-04T10:16:01.8708405Z Downloading charset_normalizer-2.0.12-py3-none-any.whl (39 kB) 2025-12-04T10:16:01.8847539Z Collecting idna<4,>=2.5 2025-12-04T10:16:01.8901623Z Downloading idna-3.11-py3-none-any.whl (71 kB) 2025-12-04T10:16:01.8921183Z ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 71.0/71.0 KB 99.9 MB/s eta 0:00:00 2025-12-04T10:16:01.9191961Z Collecting urllib3<1.27,>=1.21.1 2025-12-04T10:16:01.9245971Z Downloading urllib3-1.26.20-py2.py3-none-any.whl (144 kB) 2025-12-04T10:16:01.9263395Z ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 144.2/144.2 KB 183.9 MB/s eta 0:00:00 2025-12-04T10:16:01.9453560Z Collecting certifi>=2017.4.17 
2025-12-04T10:16:01.9511013Z Downloading certifi-2025.11.12-py3-none-any.whl (159 kB) 2025-12-04T10:16:01.9529206Z ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 159.4/159.4 KB 220.3 MB/s eta 0:00:00 2025-12-04T10:16:02.0093984Z Installing collected packages: urllib3, pyyaml, idna, charset-normalizer, certifi, requests 2025-12-04T10:16:02.1018020Z WARNING: The script normalizer is installed in '/home/runner/.local/bin' which is not on PATH. 2025-12-04T10:16:02.1019133Z Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. 2025-12-04T10:16:02.1187014Z Successfully installed certifi-2025.11.12 charset-normalizer-2.0.12 idna-3.11 pyyaml-6.0.2 requests-2.27.1 urllib3-1.26.20 2025-12-04T10:16:02.3981606Z Command completed after 1 attempt(s). 2025-12-04T10:16:02.4067523Z ##[group]Run set -x 2025-12-04T10:16:02.4067892Z set -x 2025-12-04T10:16:02.4068193Z  2025-12-04T10:16:02.4068685Z # Use relative path here as this could be checked out anywhere, not necessarily 2025-12-04T10:16:02.4069289Z # in runner workspace 2025-12-04T10:16:02.4069791Z python3 "${GITHUB_ACTION_PATH}/../../scripts/parse_ref.py" 2025-12-04T10:16:02.4079926Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:16:02.4080406Z env: 2025-12-04T10:16:02.4080788Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:16:02.4081227Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:16:02.4081987Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:16:02.4082547Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:16:02.4084199Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:16:02.4085799Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:16:02.4086182Z AWS_REGION: us-east-1 2025-12-04T10:16:02.4086632Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:16:02.4087132Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:16:02.4094156Z AWS_SESSION_TOKEN: *** 2025-12-04T10:16:02.4094500Z ##[endgroup] 2025-12-04T10:16:02.4133118Z + python3 /home/runner/_work/pytorch/pytorch/./.github/actions/filter-test-configs/../../scripts/parse_ref.py 2025-12-04T10:16:02.4231456Z Setting output branch=main 2025-12-04T10:16:02.4285209Z ##[group]Run echo "Workflow: ${GITHUB_WORKFLOW}" 2025-12-04T10:16:02.4285800Z echo "Workflow: ${GITHUB_WORKFLOW}" 2025-12-04T10:16:02.4286234Z echo "Job name: ${JOB_NAME}" 2025-12-04T10:16:02.4286618Z  2025-12-04T10:16:02.4287097Z # Use relative path here as this could be checked out anywhere, not necessarily 2025-12-04T10:16:02.4287683Z # in runner workspace 2025-12-04T10:16:02.4288222Z python3 "${GITHUB_ACTION_PATH}/../../scripts/filter_test_configs.py" \ 2025-12-04T10:16:02.4288828Z  --workflow "${GITHUB_WORKFLOW}" \ 2025-12-04T10:16:02.4289260Z  --job-name "${JOB_NAME}" \ 2025-12-04T10:16:02.4299446Z  --test-matrix "{"include": [{"config": "default", "shard": 1, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 1, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", 
"unstable": "unstable"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}]}" \ 2025-12-04T10:16:02.4309823Z  --selected-test-configs "" \ 2025-12-04T10:16:02.4310255Z  --pr-number "${PR_NUMBER}" \ 2025-12-04T10:16:02.4310701Z  --tag "${TAG}" \ 2025-12-04T10:16:02.4311085Z  --event-name "${EVENT_NAME}" \ 2025-12-04T10:16:02.4311493Z  --schedule "${SCHEDULE}" \ 2025-12-04T10:16:02.4311894Z  --branch "${HEAD_BRANCH}" 2025-12-04T10:16:02.4321867Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:16:02.4322346Z env: 2025-12-04T10:16:02.4322649Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:16:02.4323090Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:16:02.4323659Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:16:02.4324197Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:16:02.4325847Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 
2025-12-04T10:16:02.4327442Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:16:02.4327814Z AWS_REGION: us-east-1 2025-12-04T10:16:02.4328275Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:16:02.4328764Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:16:02.4335910Z AWS_SESSION_TOKEN: *** 2025-12-04T10:16:02.4336454Z GITHUB_TOKEN: *** 2025-12-04T10:16:02.4337104Z JOB_NAME: linux-jammy-rocm-py3.10 / test (distributed, 1, 3, linux.rocm.gpu.gfx942.4.b, mem_leak_check, unstable) 2025-12-04T10:16:02.4337790Z PR_NUMBER: 2025-12-04T10:16:02.4338084Z TAG: 2025-12-04T10:16:02.4338361Z EVENT_NAME: schedule 2025-12-04T10:16:02.4338688Z SCHEDULE: 29 8 * * * 2025-12-04T10:16:02.4339005Z HEAD_BRANCH: main 2025-12-04T10:16:02.4339318Z ##[endgroup] 2025-12-04T10:16:02.4377045Z Workflow: trunk-rocm-mi300 2025-12-04T10:16:02.4377749Z Job name: linux-jammy-rocm-py3.10 / test (distributed, 1, 3, linux.rocm.gpu.gfx942.4.b, mem_leak_check, unstable) 2025-12-04T10:16:03.0071134Z INFO:root:Issue https://github.com/pytorch/pytorch/issues/167616 created by jithunnair-amd has unstable all the test jobs for trunk-rocm-mi300 / linux-jammy-rocm-py3.10 / test (distributed, 1, 3, linux.rocm.gpu.gfx942.4.b, mem_leak_check, unstable) 2025-12-04T10:16:03.0285498Z Setting output keep-going=True 2025-12-04T10:16:03.0286040Z Setting output ci-verbose-test-logs=False 2025-12-04T10:16:03.0286516Z Setting output ci-test-showlocals=False 2025-12-04T10:16:03.0287550Z Setting output ci-no-test-timeout=False 2025-12-04T10:16:03.0287984Z Setting output ci-no-td=False 2025-12-04T10:16:03.0288392Z Setting output ci-td-distributed=False 2025-12-04T10:16:03.0288815Z Setting output is-unstable=True 2025-12-04T10:16:03.0289224Z Setting output reenabled-issues= 2025-12-04T10:16:03.0312366Z Setting output test-matrix={"include": [{"config": "default", "shard": 1, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 1, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "default", "shard": 1, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "default", "shard": 1, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": 
"rerun_disabled_tests"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": 
"unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}]} 2025-12-04T10:16:03.0335391Z Setting output is-test-matrix-empty=False 2025-12-04T10:16:03.0507851Z ##[group]Run echo "Filtered matrix:" 2025-12-04T10:16:03.0508360Z echo "Filtered matrix:" 2025-12-04T10:16:03.0531745Z echo "{"include": [{"config": "default", "shard": 1, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 1, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "default", "shard": 1, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "default", "shard": 1, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "default", "shard": 2, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "default", "shard": 3, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": 
"default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "default", "shard": 4, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "default", "shard": 5, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "default", "shard": 6, "num_shards": 6, "runner": "linux.rocm.gpu.gfx942.1.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "distributed", "shard": 1, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "distributed", "shard": 2, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": 
"rerun_disabled_tests", "unstable": "unstable"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "mem_leak_check": "mem_leak_check", "unstable": "unstable", "rerun_disabled_tests": "rerun_disabled_tests"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable", "mem_leak_check": "mem_leak_check"}, {"config": "distributed", "shard": 3, "num_shards": 3, "runner": "linux.rocm.gpu.gfx942.4.b", "rerun_disabled_tests": "rerun_disabled_tests", "unstable": "unstable"}]}" 2025-12-04T10:16:03.0554680Z  2025-12-04T10:16:03.0554956Z echo 2025-12-04T10:16:03.0555310Z echo "Is the current job unstable? True" 2025-12-04T10:16:03.0555727Z  2025-12-04T10:16:03.0555987Z echo 2025-12-04T10:16:03.0556322Z echo "Is keep-going label set? True" 2025-12-04T10:16:03.0556716Z  2025-12-04T10:16:03.0556979Z echo 2025-12-04T10:16:03.0557285Z echo "Reenabled issues? " 2025-12-04T10:16:03.0567000Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:16:03.0567471Z env: 2025-12-04T10:16:03.0567777Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:16:03.0568215Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:16:03.0568794Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:16:03.0569331Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:16:03.0571040Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:16:03.0572646Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:16:03.0573019Z AWS_REGION: us-east-1 2025-12-04T10:16:03.0573473Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:16:03.0574037Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:16:03.0581164Z AWS_SESSION_TOKEN: *** 2025-12-04T10:16:03.0581499Z ##[endgroup] 2025-12-04T10:16:03.0618704Z Filtered matrix: 2025-12-04T10:16:03.0643259Z {include: [{config: default, shard: 1, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, mem_leak_check: mem_leak_check, unstable: unstable}, {config: default, shard: 1, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, mem_leak_check: mem_leak_check, unstable: unstable, rerun_disabled_tests: rerun_disabled_tests}, {config: default, shard: 1, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable, mem_leak_check: mem_leak_check}, {config: default, shard: 1, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable}, {config: default, shard: 2, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, mem_leak_check: mem_leak_check, unstable: unstable}, {config: default, shard: 2, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, mem_leak_check: mem_leak_check, unstable: unstable, rerun_disabled_tests: rerun_disabled_tests}, {config: default, shard: 2, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable, mem_leak_check: mem_leak_check}, {config: default, shard: 2, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, rerun_disabled_tests: 
rerun_disabled_tests, unstable: unstable}, {config: default, shard: 3, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, mem_leak_check: mem_leak_check, unstable: unstable}, {config: default, shard: 3, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, mem_leak_check: mem_leak_check, unstable: unstable, rerun_disabled_tests: rerun_disabled_tests}, {config: default, shard: 3, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable, mem_leak_check: mem_leak_check}, {config: default, shard: 3, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable}, {config: default, shard: 4, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, mem_leak_check: mem_leak_check, unstable: unstable}, {config: default, shard: 4, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, mem_leak_check: mem_leak_check, unstable: unstable, rerun_disabled_tests: rerun_disabled_tests}, {config: default, shard: 4, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable, mem_leak_check: mem_leak_check}, {config: default, shard: 4, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable}, {config: default, shard: 5, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, mem_leak_check: mem_leak_check, unstable: unstable}, {config: default, shard: 5, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, mem_leak_check: mem_leak_check, unstable: unstable, rerun_disabled_tests: rerun_disabled_tests}, {config: default, shard: 5, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable, mem_leak_check: mem_leak_check}, {config: default, shard: 5, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable}, {config: default, shard: 6, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, mem_leak_check: mem_leak_check, unstable: unstable}, {config: default, shard: 6, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, mem_leak_check: mem_leak_check, unstable: unstable, rerun_disabled_tests: rerun_disabled_tests}, {config: default, shard: 6, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable, mem_leak_check: mem_leak_check}, {config: default, shard: 6, num_shards: 6, runner: linux.rocm.gpu.gfx942.1.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable}, {config: distributed, shard: 1, num_shards: 3, runner: linux.rocm.gpu.gfx942.4.b, mem_leak_check: mem_leak_check, unstable: unstable}, {config: distributed, shard: 1, num_shards: 3, runner: linux.rocm.gpu.gfx942.4.b, mem_leak_check: mem_leak_check, unstable: unstable, rerun_disabled_tests: rerun_disabled_tests}, {config: distributed, shard: 1, num_shards: 3, runner: linux.rocm.gpu.gfx942.4.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable, mem_leak_check: mem_leak_check}, {config: distributed, shard: 1, num_shards: 3, runner: linux.rocm.gpu.gfx942.4.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable}, {config: distributed, shard: 2, num_shards: 3, runner: linux.rocm.gpu.gfx942.4.b, mem_leak_check: mem_leak_check, unstable: unstable}, {config: distributed, shard: 2, num_shards: 3, runner: linux.rocm.gpu.gfx942.4.b, mem_leak_check: mem_leak_check, unstable: unstable, rerun_disabled_tests: rerun_disabled_tests}, {config: distributed, shard: 2, num_shards: 3, runner: 
linux.rocm.gpu.gfx942.4.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable, mem_leak_check: mem_leak_check}, {config: distributed, shard: 2, num_shards: 3, runner: linux.rocm.gpu.gfx942.4.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable}, {config: distributed, shard: 3, num_shards: 3, runner: linux.rocm.gpu.gfx942.4.b, mem_leak_check: mem_leak_check, unstable: unstable}, {config: distributed, shard: 3, num_shards: 3, runner: linux.rocm.gpu.gfx942.4.b, mem_leak_check: mem_leak_check, unstable: unstable, rerun_disabled_tests: rerun_disabled_tests}, {config: distributed, shard: 3, num_shards: 3, runner: linux.rocm.gpu.gfx942.4.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable, mem_leak_check: mem_leak_check}, {config: distributed, shard: 3, num_shards: 3, runner: linux.rocm.gpu.gfx942.4.b, rerun_disabled_tests: rerun_disabled_tests, unstable: unstable}]} 2025-12-04T10:16:03.0665702Z 2025-12-04T10:16:03.0665854Z Is the current job unstable? True 2025-12-04T10:16:03.0666132Z 2025-12-04T10:16:03.0666281Z Is keep-going label set? True 2025-12-04T10:16:03.0666525Z 2025-12-04T10:16:03.0666654Z Reenabled issues? 2025-12-04T10:16:03.0722023Z ##[group]Run echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2025-12-04T10:16:03.0722698Z echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2025-12-04T10:16:03.0730950Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:16:03.0731421Z env: 2025-12-04T10:16:03.0731724Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:16:03.0732160Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:16:03.0732731Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:16:03.0733269Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:16:03.0734930Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:16:03.0736602Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:16:03.0736989Z AWS_REGION: us-east-1 2025-12-04T10:16:03.0737422Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:16:03.0737913Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:16:03.0745024Z AWS_SESSION_TOKEN: *** 2025-12-04T10:16:03.0745359Z JOB_TIMEOUT: 600 2025-12-04T10:16:03.0745667Z ##[endgroup] 2025-12-04T10:16:03.0817359Z ##[group]Run env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-12-04T10:16:03.0818032Z env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-12-04T10:16:03.0818616Z env | grep '^CI' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-12-04T10:16:03.0828822Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-12-04T10:16:03.0829293Z env: 2025-12-04T10:16:03.0829597Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:16:03.0830036Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:16:03.0830661Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:16:03.0831206Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:16:03.0832934Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 
2025-12-04T10:16:03.0834567Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:16:03.0834945Z AWS_REGION: us-east-1 2025-12-04T10:16:03.0835390Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:16:03.0835873Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:16:03.0842961Z AWS_SESSION_TOKEN: *** 2025-12-04T10:16:03.0843294Z ##[endgroup] 2025-12-04T10:16:03.0975686Z ##[group]Run set -x 2025-12-04T10:16:03.0976101Z set -x 2025-12-04T10:16:03.0976389Z  2025-12-04T10:16:03.0976730Z if [[ $TEST_CONFIG == 'multigpu' ]]; then 2025-12-04T10:16:03.0977233Z  TEST_COMMAND=.ci/pytorch/multigpu-test.sh 2025-12-04T10:16:03.0977732Z elif [[ $BUILD_ENVIRONMENT == *onnx* ]]; then 2025-12-04T10:16:03.0978190Z  TEST_COMMAND=.ci/caffe2/test.sh 2025-12-04T10:16:03.0978571Z else 2025-12-04T10:16:03.0978921Z  TEST_COMMAND=.ci/pytorch/test.sh 2025-12-04T10:16:03.0979303Z fi 2025-12-04T10:16:03.0979570Z  2025-12-04T10:16:03.0979993Z # detached container should get cleaned up by teardown_ec2_linux 2025-12-04T10:16:03.0980712Z # TODO: Stop building test binaries as part of the build phase 2025-12-04T10:16:03.0981296Z # Used for GPU_FLAG since that doesn't play nice 2025-12-04T10:16:03.0981851Z # shellcheck disable=SC2086,SC2090 2025-12-04T10:16:03.0982276Z container_name=$(docker run \ 2025-12-04T10:16:03.0982685Z  ${GPU_FLAG:-} \ 2025-12-04T10:16:03.0983056Z  -e BUILD_ENVIRONMENT \ 2025-12-04T10:16:03.0983443Z  -e PR_NUMBER \ 2025-12-04T10:16:03.0983798Z  -e GITHUB_ACTIONS \ 2025-12-04T10:16:03.0984173Z  -e GITHUB_REPOSITORY \ 2025-12-04T10:16:03.0984555Z  -e GITHUB_WORKFLOW \ 2025-12-04T10:16:03.1010107Z  -e GITHUB_JOB \ 2025-12-04T10:16:03.1010896Z  -e GITHUB_RUN_ID \ 2025-12-04T10:16:03.1011282Z  -e GITHUB_RUN_NUMBER \ 2025-12-04T10:16:03.1011682Z  -e GITHUB_RUN_ATTEMPT \ 2025-12-04T10:16:03.1012064Z  -e JOB_ID \ 2025-12-04T10:16:03.1012404Z  -e JOB_NAME \ 2025-12-04T10:16:03.1012749Z  -e BASE_SHA \ 2025-12-04T10:16:03.1013082Z  -e BRANCH \ 2025-12-04T10:16:03.1013407Z  -e SHA1 \ 2025-12-04T10:16:03.1013746Z  -e AWS_DEFAULT_REGION \ 2025-12-04T10:16:03.1014126Z  -e IN_WHEEL_TEST \ 2025-12-04T10:16:03.1014486Z  -e SHARD_NUMBER \ 2025-12-04T10:16:03.1014843Z  -e TEST_CONFIG \ 2025-12-04T10:16:03.1015201Z  -e NUM_TEST_SHARDS \ 2025-12-04T10:16:03.1015580Z  -e REENABLED_ISSUES \ 2025-12-04T10:16:03.1015974Z  -e CONTINUE_THROUGH_ERROR \ 2025-12-04T10:16:03.1016379Z  -e VERBOSE_TEST_LOGS \ 2025-12-04T10:16:03.1016754Z  -e TEST_SHOWLOCALS \ 2025-12-04T10:16:03.1017127Z  -e NO_TEST_TIMEOUT \ 2025-12-04T10:16:03.1017484Z  -e NO_TD \ 2025-12-04T10:16:03.1017866Z  -e MAX_JOBS="$(nproc --ignore=2)" \ 2025-12-04T10:16:03.1018330Z  -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK \ 2025-12-04T10:16:03.1018790Z  -e PYTORCH_TEST_RERUN_DISABLED_TESTS \ 2025-12-04T10:16:03.1019226Z  -e TESTS_TO_INCLUDE \ 2025-12-04T10:16:03.1019612Z  -e HUGGING_FACE_HUB_TOKEN \ 2025-12-04T10:16:03.1020014Z  -e DASHBOARD_TAG \ 2025-12-04T10:16:03.1020489Z  --env-file="${RUNNER_TEMP}/github_env_${GITHUB_RUN_ID}" \ 2025-12-04T10:16:03.1021105Z  --ulimit stack=10485760:83886080 \ 2025-12-04T10:16:03.1021511Z  --ulimit core=0 \ 2025-12-04T10:16:03.1021942Z  --env-file="/tmp/github_env_${GITHUB_RUN_ID}" \ 2025-12-04T10:16:03.1022441Z  --security-opt seccomp=unconfined \ 2025-12-04T10:16:03.1022878Z  --cap-add=SYS_PTRACE \ 2025-12-04T10:16:03.1023264Z  --shm-size="8g" \ 2025-12-04T10:16:03.1023605Z  --tty \ 2025-12-04T10:16:03.1023920Z  --detach \ 2025-12-04T10:16:03.1024275Z  --name="${container_name}" \ 2025-12-04T10:16:03.1024672Z  --user jenkins \ 2025-12-04T10:16:03.1025121Z  -v 
"${GITHUB_WORKSPACE}:/var/lib/jenkins/workspace" \ 2025-12-04T10:16:03.1025622Z  -w /var/lib/jenkins/workspace \ 2025-12-04T10:16:03.1026228Z  "${DOCKER_IMAGE}" 2025-12-04T10:16:03.1026563Z ) 2025-12-04T10:16:03.1026892Z # save container name for later step 2025-12-04T10:16:03.1027412Z echo "CONTAINER_NAME=${container_name}" >> "$GITHUB_ENV" 2025-12-04T10:16:03.1028301Z # jenkins user does not have write permission to mounted workspace; work-around by copying within container to jenkins home 2025-12-04T10:16:03.1029427Z docker exec -t "${container_name}" sh -c "cd .. && cp -R workspace pytorch && cd pytorch && pip install dist/*.whl && ${TEST_COMMAND}" 2025-12-04T10:16:03.1038332Z shell: /usr/bin/bash -e {0} 2025-12-04T10:16:03.1038679Z env: 2025-12-04T10:16:03.1038969Z GIT_DEFAULT_BRANCH: main 2025-12-04T10:16:03.1039405Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T10:16:03.1039986Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T10:16:03.1040525Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T10:16:03.1042253Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T10:16:03.1043862Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T10:16:03.1044232Z AWS_REGION: us-east-1 2025-12-04T10:16:03.1045132Z AWS_ACCESS_KEY_ID: *** 2025-12-04T10:16:03.1045750Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T10:16:03.1053053Z AWS_SESSION_TOKEN: *** 2025-12-04T10:16:03.1053445Z BUILD_ENVIRONMENT: linux-jammy-rocm-py3.10 2025-12-04T10:16:03.1053854Z PR_NUMBER: 2025-12-04T10:16:03.1054177Z GITHUB_REPOSITORY: pytorch/pytorch 2025-12-04T10:16:03.1054592Z GITHUB_WORKFLOW: trunk-rocm-mi300 2025-12-04T10:16:03.1054965Z GITHUB_JOB: test 2025-12-04T10:16:03.1055281Z GITHUB_RUN_ID: 19922849170 2025-12-04T10:16:03.1055631Z GITHUB_RUN_NUMBER: 689 2025-12-04T10:16:03.1055973Z GITHUB_RUN_ATTEMPT: 1 2025-12-04T10:16:03.1056295Z JOB_ID: 57116213174 2025-12-04T10:16:03.1056945Z JOB_NAME: linux-jammy-rocm-py3.10 / test (distributed, 1, 3, linux.rocm.gpu.gfx942.4.b, mem_leak_check, unstable) 2025-12-04T10:16:03.1057620Z BRANCH: main 2025-12-04T10:16:03.1057972Z SHA1: ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:16:03.1058465Z BASE_SHA: ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:16:03.1058909Z TEST_CONFIG: distributed 2025-12-04T10:16:03.1059242Z SHARD_NUMBER: 1 2025-12-04T10:16:03.1059551Z NUM_TEST_SHARDS: 3 2025-12-04T10:16:03.1059862Z REENABLED_ISSUES: 2025-12-04T10:16:03.1060191Z CONTINUE_THROUGH_ERROR: True 2025-12-04T10:16:03.1060556Z VERBOSE_TEST_LOGS: False 2025-12-04T10:16:03.1060950Z TEST_SHOWLOCALS: False 2025-12-04T10:16:03.1061288Z NO_TEST_TIMEOUT: False 2025-12-04T10:16:03.1061606Z NO_TD: False 2025-12-04T10:16:03.1062472Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:16:03.1063425Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK: 1 2025-12-04T10:16:03.1063842Z PYTORCH_TEST_RERUN_DISABLED_TESTS: 0 2025-12-04T10:16:03.1064225Z TESTS_TO_INCLUDE: 2025-12-04T10:16:03.1064531Z DASHBOARD_TAG: 2025-12-04T10:16:03.1064978Z HUGGING_FACE_HUB_TOKEN: *** 2025-12-04T10:16:03.1065328Z ##[endgroup] 2025-12-04T10:16:03.1088960Z + [[ distributed == \m\u\l\t\i\g\p\u ]] 
2025-12-04T10:16:03.1089416Z + [[ linux-jammy-rocm-py3.10 == *onnx* ]] 2025-12-04T10:16:03.1089854Z + TEST_COMMAND=.ci/pytorch/test.sh 2025-12-04T10:16:03.1099467Z +++ nproc --ignore=2 2025-12-04T10:16:03.1117860Z ++ docker run --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host -e BUILD_ENVIRONMENT -e PR_NUMBER -e GITHUB_ACTIONS -e GITHUB_REPOSITORY -e GITHUB_WORKFLOW -e GITHUB_JOB -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e JOB_ID -e JOB_NAME -e BASE_SHA -e BRANCH -e SHA1 -e AWS_DEFAULT_REGION -e IN_WHEEL_TEST -e SHARD_NUMBER -e TEST_CONFIG -e NUM_TEST_SHARDS -e REENABLED_ISSUES -e CONTINUE_THROUGH_ERROR -e VERBOSE_TEST_LOGS -e TEST_SHOWLOCALS -e NO_TEST_TIMEOUT -e NO_TD -e MAX_JOBS=126 -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK -e PYTORCH_TEST_RERUN_DISABLED_TESTS -e TESTS_TO_INCLUDE -e HUGGING_FACE_HUB_TOKEN -e DASHBOARD_TAG --env-file=/home/runner/_work/_temp/github_env_19922849170 --ulimit stack=10485760:83886080 --ulimit core=0 --env-file=/tmp/github_env_19922849170 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --shm-size=8g --tty --detach --name= --user jenkins -v /home/runner/_work/pytorch/pytorch:/var/lib/jenkins/workspace -w /var/lib/jenkins/workspace 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-rocm-n-py3-f0cd68561080d537ef3d3d6f81b25a6416ad600a 2025-12-04T10:16:03.3753433Z + container_name=f376f08e81f7dfe3b6a525fadd8605d64876caf592501f7ac6f3aa383436ff61 2025-12-04T10:16:03.3754323Z + echo CONTAINER_NAME=f376f08e81f7dfe3b6a525fadd8605d64876caf592501f7ac6f3aa383436ff61 2025-12-04T10:16:03.3755628Z + docker exec -t f376f08e81f7dfe3b6a525fadd8605d64876caf592501f7ac6f3aa383436ff61 sh -c 'cd .. 
&& cp -R workspace pytorch && cd pytorch && pip install dist/*.whl && .ci/pytorch/test.sh' 2025-12-04T10:16:06.5268770Z Processing ./dist/torch-2.10.0a0+gitffd9b0f-cp310-cp310-linux_x86_64.whl 2025-12-04T10:16:07.0629694Z Requirement already satisfied: filelock in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.10.0a0+gitffd9b0f) (3.18.0) 2025-12-04T10:16:07.0631644Z Requirement already satisfied: typing-extensions>=4.10.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.10.0a0+gitffd9b0f) (4.12.2) 2025-12-04T10:16:07.0633174Z Requirement already satisfied: sympy>=1.13.3 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.10.0a0+gitffd9b0f) (1.13.3) 2025-12-04T10:16:07.0634452Z Requirement already satisfied: networkx>=2.5.1 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.10.0a0+gitffd9b0f) (2.8.8) 2025-12-04T10:16:07.0635679Z Requirement already satisfied: jinja2 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.10.0a0+gitffd9b0f) (3.1.6) 2025-12-04T10:16:07.0636932Z Requirement already satisfied: fsspec>=0.8.5 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.10.0a0+gitffd9b0f) (2025.10.0) 2025-12-04T10:16:07.0799282Z Requirement already satisfied: mpmath<1.4,>=1.1.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from sympy>=1.13.3->torch==2.10.0a0+gitffd9b0f) (1.3.0) 2025-12-04T10:16:07.0822280Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from jinja2->torch==2.10.0a0+gitffd9b0f) (3.0.3) 2025-12-04T10:16:07.2790980Z Installing collected packages: torch 2025-12-04T10:16:12.7910841Z Successfully installed torch-2.10.0a0+gitffd9b0f 2025-12-04T10:16:12.8282715Z + export TERM=vt100 2025-12-04T10:16:12.8283093Z + TERM=vt100 2025-12-04T10:16:12.8289203Z ++ dirname .ci/pytorch/test.sh 2025-12-04T10:16:12.8306959Z + source .ci/pytorch/common.sh 2025-12-04T10:16:12.8313594Z +++ dirname .ci/pytorch/common.sh 2025-12-04T10:16:12.8329053Z ++ source .ci/pytorch/common_utils.sh 2025-12-04T10:16:12.8330230Z +++ declare -f -t trap_add 2025-12-04T10:16:12.8336745Z ++ set -ex -o pipefail 2025-12-04T10:16:12.8337149Z ++ [[ linux-jammy-rocm-py3.10 == *rocm* ]] 2025-12-04T10:16:12.8337581Z ++ unset HIP_PLATFORM 2025-12-04T10:16:12.8337946Z ++ export PYTORCH_TEST_WITH_ROCM=1 2025-12-04T10:16:12.8338336Z ++ PYTORCH_TEST_WITH_ROCM=1 2025-12-04T10:16:12.8338694Z ++ BUILD_TEST_LIBTORCH=0 2025-12-04T10:16:12.8343304Z ++ dirname .ci/pytorch/test.sh 2025-12-04T10:16:12.8358485Z + source .ci/pytorch/common-build.sh 2025-12-04T10:16:12.8361066Z ++ [[ linux-jammy-rocm-py3.10 != *win-* ]] 2025-12-04T10:16:12.8373694Z ++++ dirname .ci/pytorch/common-build.sh 2025-12-04T10:16:12.8388687Z +++ cd .ci/pytorch 2025-12-04T10:16:12.8390396Z +++ pwd -P 2025-12-04T10:16:12.8393110Z ++ script_dir=/var/lib/jenkins/pytorch/.ci/pytorch 2025-12-04T10:16:12.8393793Z ++ [[ linux-jammy-rocm-py3.10 == *-pch* ]] 2025-12-04T10:16:12.8394153Z ++ which sccache 2025-12-04T10:16:12.8411711Z ++ [[ -z '' ]] 2025-12-04T10:16:12.8412051Z ++ unset SCCACHE_BUCKET 2025-12-04T10:16:12.8412399Z ++ unset SCCACHE_REGION 2025-12-04T10:16:12.8412739Z ++ sccache --stop-server 2025-12-04T10:16:12.8427656Z ++ true 2025-12-04T10:16:12.8427987Z ++ rm -f /var/lib/jenkins/sccache_error.log 2025-12-04T10:16:12.8445095Z ++ trap_add sccache_epilogue EXIT 2025-12-04T10:16:12.8445492Z ++ trap_add_cmd=sccache_epilogue 2025-12-04T10:16:12.8445854Z ++ shift 
2025-12-04T10:16:12.8446153Z ++ for trap_add_name in "$@" 2025-12-04T10:16:12.8455160Z ++++ trap -p EXIT 2025-12-04T10:16:12.8458812Z +++ eval 'extract_trap_cmd ' 2025-12-04T10:16:12.8459172Z ++++ extract_trap_cmd 2025-12-04T10:16:12.8459502Z ++++ printf '%s\n' '' 2025-12-04T10:16:12.8459847Z +++ printf '%s\n' sccache_epilogue 2025-12-04T10:16:12.8462179Z ++ trap -- ' 2025-12-04T10:16:12.8462518Z sccache_epilogue' EXIT 2025-12-04T10:16:12.8462836Z ++ [[ -n '' ]] 2025-12-04T10:16:12.8463159Z ++ [[ linux-jammy-rocm-py3.10 == *rocm* ]] 2025-12-04T10:16:12.8463636Z ++ SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2025-12-04T10:16:12.8464095Z ++ SCCACHE_IDLE_TIMEOUT=0 2025-12-04T10:16:12.8464438Z ++ sccache --start-server 2025-12-04T10:16:12.8488539Z sccache: Starting the server... 2025-12-04T10:16:12.8691454Z sccache: Listening on address 127.0.0.1:4226 2025-12-04T10:16:12.8698314Z ++ sccache --zero-stats 2025-12-04T10:16:12.8711792Z Statistics zeroed. 2025-12-04T10:16:12.8716225Z ++ which ccache 2025-12-04T10:16:12.8726087Z + [[ linux-jammy-rocm-py3.10 != *rocm* ]] 2025-12-04T10:16:12.8726231Z + [[ linux-jammy-rocm-py3.10 == *cuda* ]] 2025-12-04T10:16:12.8726359Z + echo 'Environment variables:' 2025-12-04T10:16:12.8726478Z Environment variables: 2025-12-04T10:16:12.8726577Z + env 2025-12-04T10:16:12.8732528Z GITHUB_WORKSPACE=/home/runner/_work/pytorch/pytorch 2025-12-04T10:16:12.8732694Z CONTINUE_THROUGH_ERROR=True 2025-12-04T10:16:12.8732822Z BUILD_ENVIRONMENT=linux-jammy-rocm-py3.10 2025-12-04T10:16:12.8732993Z HOSTNAME=linux.rocm.gpu.gfx942.4.b-bphpw-runner-mcn25 2025-12-04T10:16:12.8733236Z GITHUB_PATH=/home/runner/_work/_temp/_runner_file_commands/add_path_3095bd82-0065-4572-9784-f6f76da4d44f 2025-12-04T10:16:12.8733445Z GITHUB_ACTION=__run_2 2025-12-04T10:16:12.8733554Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 2025-12-04T10:16:12.8733672Z GITHUB_RUN_NUMBER=689 2025-12-04T10:16:12.8733773Z TEST_CONFIG=distributed 2025-12-04T10:16:12.8733912Z RUNNER_NAME=linux.rocm.gpu.gfx942.4.b-bphpw-runner-mcn25 2025-12-04T10:16:12.8734065Z GITHUB_REPOSITORY_OWNER_ID=21003710 2025-12-04T10:16:12.8734192Z AWS_DEFAULT_REGION=us-east-1 2025-12-04T10:16:12.8734333Z RUNNER_ARTIFACT_DIR=/home/runner/_work/_temp/artifacts 2025-12-04T10:16:12.8734480Z GITHUB_TRIGGERING_ACTOR=pytorchmergebot 2025-12-04T10:16:12.8734603Z GITHUB_REF_TYPE=branch 2025-12-04T10:16:12.8734741Z BASE_SHA=ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:16:12.8735002Z HUGGING_FACE_HUB_TOKEN=*** 2025-12-04T10:16:12.8735441Z *** 2025-12-04T10:16:12.8735537Z GITHUB_REPOSITORY_ID=65600975 2025-12-04T10:16:12.8735649Z GITHUB_ACTIONS=true 2025-12-04T10:16:12.8735772Z SHA1=ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:16:12.8735931Z GITHUB_SHA=ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:16:12.8736158Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/trunk-rocm-mi300.yml@refs/heads/main 2025-12-04T10:16:12.8736348Z UCC_HOME=/usr 2025-12-04T10:16:12.8736448Z RUNNER_ENVIRONMENT=self-hosted 2025-12-04T10:16:12.8736559Z VERBOSE_TEST_LOGS=False 2025-12-04T10:16:12.8736664Z GITHUB_REF=refs/heads/main 2025-12-04T10:16:12.8736767Z RUNNER_OS=Linux 2025-12-04T10:16:12.8736859Z SHARD_NUMBER=1 2025-12-04T10:16:12.8736959Z GITHUB_REF_PROTECTED=true 2025-12-04T10:16:12.8737221Z RUNNER_MANUALLY_TRAP_SIG=1 2025-12-04T10:16:12.8737329Z HOME=/var/lib/jenkins 2025-12-04T10:16:12.8737448Z GITHUB_API_URL=https://api.github.com 2025-12-04T10:16:12.8737583Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2025-12-04T10:16:12.8737715Z 
RUNNER_DOCS_DIR=/home/runner/_work/_temp/docs 2025-12-04T10:16:12.8737845Z LANG=C.UTF-8 2025-12-04T10:16:12.8737956Z UCX_COMMIT=29831d319e6be55cb8c768ca61de335c934ca39e 2025-12-04T10:16:12.8738093Z PYTORCH_TEST_WITH_ROCM=1 2025-12-04T10:16:12.8738234Z RUNNER_TRACKING_ID=github_b1cf6206-fd9d-4d9d-b5a1-7cf910bd136f 2025-12-04T10:16:12.8738384Z RUNNER_ARCH=X64 2025-12-04T10:16:12.8738485Z RUNNER_TEMP=/home/runner/_work/_temp 2025-12-04T10:16:12.8738650Z NUM_TEST_SHARDS=3 2025-12-04T10:16:12.8738744Z UCX_HOME=/usr 2025-12-04T10:16:12.8738929Z GITHUB_STATE=/home/runner/_work/_temp/_runner_file_commands/save_state_3095bd82-0065-4572-9784-f6f76da4d44f 2025-12-04T10:16:12.8739238Z JOB_NAME=linux-jammy-rocm-py3.10 / test (distributed, 1, 3, linux.rocm.gpu.gfx942.4.b, mem_leak_check, unstable) 2025-12-04T10:16:12.8739454Z MAGMA_HOME=/opt/rocm/magma 2025-12-04T10:16:12.8739642Z GITHUB_ENV=/home/runner/_work/_temp/_runner_file_commands/set_env_3095bd82-0065-4572-9784-f6f76da4d44f 2025-12-04T10:16:12.8739886Z GITHUB_EVENT_PATH=/home/runner/_work/_temp/_github_workflow/event.json 2025-12-04T10:16:12.8740046Z GITHUB_EVENT_NAME=schedule 2025-12-04T10:16:12.8740202Z GITHUB_ACTIONS_RUNNER_EXTRA_USER_AGENT=actions-runner-controller/0.12.1 2025-12-04T10:16:12.8740364Z DASHBOARD_TAG= 2025-12-04T10:16:12.8740502Z GITHUB_RUN_ID=19922849170 2025-12-04T10:16:12.8741097Z GITHUB_STEP_SUMMARY=/home/runner/_work/_temp/_runner_file_commands/step_summary_3095bd82-0065-4572-9784-f6f76da4d44f 2025-12-04T10:16:12.8741319Z GITHUB_ACTOR=pytorchmergebot 2025-12-04T10:16:12.8741426Z PR_NUMBER= 2025-12-04T10:16:12.8741518Z GITHUB_RUN_ATTEMPT=1 2025-12-04T10:16:12.8741626Z ANACONDA_PYTHON_VERSION=3.10 2025-12-04T10:16:12.8741761Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2025-12-04T10:16:12.8741895Z TERM=vt100 2025-12-04T10:16:12.8741986Z INSTALLED_VISION=yes 2025-12-04T10:16:12.8742085Z BRANCH=main 2025-12-04T10:16:12.8742181Z OPENSSL_ROOT_DIR=/opt/openssl 2025-12-04T10:16:12.8742292Z TESTS_TO_INCLUDE= 2025-12-04T10:16:12.8742450Z GITHUB_ACTION_PATH=/home/runner/_work/pytorch/pytorch/./.github/actions/setup-rocm 2025-12-04T10:16:12.8742637Z GITHUB_SERVER_URL=https://github.com 2025-12-04T10:16:12.8742772Z PYTORCH_ROCM_ARCH=gfx90a;gfx942;gfx950;gfx1100 2025-12-04T10:16:12.8742920Z UCC_COMMIT=9f4b242cbbd8b1462cbc732eb29316cdfa124b77 2025-12-04T10:16:12.8743056Z REENABLED_ISSUES= 2025-12-04T10:16:12.8743149Z SHLVL=1 2025-12-04T10:16:12.8743235Z MAX_JOBS=126 2025-12-04T10:16:12.8743361Z RUNNER_TEST_RESULTS_DIR=/home/runner/_work/_temp/test-results 2025-12-04T10:16:12.8743511Z GITHUB_ACTOR_ID=97764156 2025-12-04T10:16:12.8743644Z RUNNER_TOOL_CACHE=/home/runner/_work/_tool 2025-12-04T10:16:12.8743803Z GITHUB_WORKFLOW_SHA=ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:16:12.8743954Z GITHUB_REF_NAME=main 2025-12-04T10:16:12.8744055Z ROCM_PATH=/opt/rocm 2025-12-04T10:16:12.8744152Z GITHUB_JOB=test 2025-12-04T10:16:12.8744246Z NO_TEST_TIMEOUT=False 2025-12-04T10:16:12.8744355Z GITHUB_REPOSITORY=pytorch/pytorch 2025-12-04T10:16:12.8744473Z LC_ALL=C.UTF-8 2025-12-04T10:16:12.8744569Z GITHUB_RETENTION_DAYS=90 2025-12-04T10:16:12.8744685Z RUNNER_WORKSPACE=/home/runner/_work/pytorch 2025-12-04T10:16:12.8744812Z OPENSSL_DIR=/opt/openssl 2025-12-04T10:16:12.8744922Z GITHUB_ACTION_REPOSITORY= 2025-12-04T10:16:12.8745278Z 
PATH=/opt/cache/bin:/opt/rocm/llvm/bin:/opt/rocm/opencl/bin:/opt/rocm/hip/bin:/opt/rocm/hcc/bin:/opt/rocm/bin:/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-12-04T10:16:12.8745628Z GITHUB_BASE_REF= 2025-12-04T10:16:12.8745721Z CI=true 2025-12-04T10:16:12.8745815Z GITHUB_REPOSITORY_OWNER=pytorch 2025-12-04T10:16:12.8745926Z JOB_ID=57116213174 2025-12-04T10:16:12.8746019Z GITHUB_HEAD_REF= 2025-12-04T10:16:12.8746112Z GITHUB_ACTION_REF= 2025-12-04T10:16:12.8746259Z TEST_SHOWLOCALS=False 2025-12-04T10:16:12.8746375Z GITHUB_WORKFLOW=trunk-rocm-mi300 2025-12-04T10:16:12.8746496Z DEBIAN_FRONTEND=noninteractive 2025-12-04T10:16:12.8746701Z GITHUB_OUTPUT=/home/runner/_work/_temp/_runner_file_commands/set_output_3095bd82-0065-4572-9784-f6f76da4d44f 2025-12-04T10:16:12.8746906Z NO_TD=False 2025-12-04T10:16:12.8746997Z OLDPWD=/var/lib/jenkins 2025-12-04T10:16:12.8747097Z _=/usr/bin/env 2025-12-04T10:16:12.8747224Z ++ python -c 'import site; print(site.getsitepackages()[0])' 2025-12-04T10:16:12.8813359Z + TORCH_INSTALL_DIR=/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch 2025-12-04T10:16:12.8814308Z + TORCH_BIN_DIR=/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin 2025-12-04T10:16:12.8815122Z + TORCH_LIB_DIR=/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib 2025-12-04T10:16:12.8815840Z + TORCH_TEST_DIR=/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/test 2025-12-04T10:16:12.8816392Z + BUILD_DIR=build 2025-12-04T10:16:12.8816796Z + BUILD_RENAMED_DIR=build_renamed 2025-12-04T10:16:12.8817191Z + BUILD_BIN_DIR=build/bin 2025-12-04T10:16:12.8817553Z + SHARD_NUMBER=1 2025-12-04T10:16:12.8817853Z + NUM_TEST_SHARDS=3 2025-12-04T10:16:12.8818199Z + export TORCH_SERIALIZATION_DEBUG=1 2025-12-04T10:16:12.8818611Z + TORCH_SERIALIZATION_DEBUG=1 2025-12-04T10:16:12.8818969Z + export VALGRIND=ON 2025-12-04T10:16:12.8819289Z + VALGRIND=ON 2025-12-04T10:16:12.8819623Z + [[ linux-jammy-rocm-py3.10 == *clang9* ]] 2025-12-04T10:16:12.8820662Z + [[ linux-jammy-rocm-py3.10 == *xpu* ]] 2025-12-04T10:16:12.8821049Z + detect_cuda_arch 2025-12-04T10:16:12.8821377Z + [[ linux-jammy-rocm-py3.10 == *cuda* ]] 2025-12-04T10:16:12.8821794Z + [[ linux-jammy-rocm-py3.10 == *s390x* ]] 2025-12-04T10:16:12.8822168Z + [[ 0 == \1 ]] 2025-12-04T10:16:12.8822451Z + [[ True == \1 ]] 2025-12-04T10:16:12.8822789Z + [[ linux-jammy-rocm-py3.10 != *bazel* ]] 2025-12-04T10:16:12.8823213Z ++ realpath build/custom_test_artifacts 2025-12-04T10:16:12.8838269Z + CUSTOM_TEST_ARTIFACT_BUILD_DIR=/var/lib/jenkins/pytorch/build/custom_test_artifacts 2025-12-04T10:16:12.8838865Z + [[ -n '' ]] 2025-12-04T10:16:12.8839180Z + echo 'Environment variables' 2025-12-04T10:16:12.8839538Z Environment variables 2025-12-04T10:16:12.8839845Z + env 2025-12-04T10:16:12.8848464Z GITHUB_WORKSPACE=/home/runner/_work/pytorch/pytorch 2025-12-04T10:16:12.8848971Z CONTINUE_THROUGH_ERROR=True 2025-12-04T10:16:12.8849372Z BUILD_ENVIRONMENT=linux-jammy-rocm-py3.10 2025-12-04T10:16:12.8849889Z HOSTNAME=linux.rocm.gpu.gfx942.4.b-bphpw-runner-mcn25 2025-12-04T10:16:12.8850870Z GITHUB_PATH=/home/runner/_work/_temp/_runner_file_commands/add_path_3095bd82-0065-4572-9784-f6f76da4d44f 2025-12-04T10:16:12.8851527Z GITHUB_ACTION=__run_2 2025-12-04T10:16:12.8851885Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 2025-12-04T10:16:12.8852267Z GITHUB_RUN_NUMBER=689 2025-12-04T10:16:12.8852590Z TEST_CONFIG=distributed 2025-12-04T10:16:12.8853027Z RUNNER_NAME=linux.rocm.gpu.gfx942.4.b-bphpw-runner-mcn25 
2025-12-04T10:16:12.8853530Z GITHUB_REPOSITORY_OWNER_ID=21003710 2025-12-04T10:16:12.8853936Z AWS_DEFAULT_REGION=us-east-1 2025-12-04T10:16:12.8854376Z RUNNER_ARTIFACT_DIR=/home/runner/_work/_temp/artifacts 2025-12-04T10:16:12.8854850Z GITHUB_TRIGGERING_ACTOR=pytorchmergebot 2025-12-04T10:16:12.8855249Z GITHUB_REF_TYPE=branch 2025-12-04T10:16:12.8855634Z BASE_SHA=ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:16:12.8856362Z HUGGING_FACE_HUB_TOKEN=*** 2025-12-04T10:16:12.8856809Z *** 2025-12-04T10:16:12.8857105Z GITHUB_REPOSITORY_ID=65600975 2025-12-04T10:16:12.8857468Z GITHUB_ACTIONS=true 2025-12-04T10:16:12.8857830Z SHA1=ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:16:12.8858314Z GITHUB_SHA=ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:16:12.8859004Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/trunk-rocm-mi300.yml@refs/heads/main 2025-12-04T10:16:12.8859615Z UCC_HOME=/usr 2025-12-04T10:16:12.8859926Z TORCH_SERIALIZATION_DEBUG=1 2025-12-04T10:16:12.8860298Z RUNNER_ENVIRONMENT=self-hosted 2025-12-04T10:16:12.8860974Z VERBOSE_TEST_LOGS=False 2025-12-04T10:16:12.8861313Z GITHUB_REF=refs/heads/main 2025-12-04T10:16:12.8861646Z RUNNER_OS=Linux 2025-12-04T10:16:12.8861935Z SHARD_NUMBER=1 2025-12-04T10:16:12.8862243Z GITHUB_REF_PROTECTED=true 2025-12-04T10:16:12.8862594Z RUNNER_MANUALLY_TRAP_SIG=1 2025-12-04T10:16:12.8862933Z HOME=/var/lib/jenkins 2025-12-04T10:16:12.8863309Z GITHUB_API_URL=https://api.github.com 2025-12-04T10:16:12.8863740Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2025-12-04T10:16:12.8864162Z RUNNER_DOCS_DIR=/home/runner/_work/_temp/docs 2025-12-04T10:16:12.8864569Z LANG=C.UTF-8 2025-12-04T10:16:12.8864926Z UCX_COMMIT=29831d319e6be55cb8c768ca61de335c934ca39e 2025-12-04T10:16:12.8865360Z PYTORCH_TEST_WITH_ROCM=1 2025-12-04T10:16:12.8865821Z RUNNER_TRACKING_ID=github_b1cf6206-fd9d-4d9d-b5a1-7cf910bd136f 2025-12-04T10:16:12.8866298Z RUNNER_ARCH=X64 2025-12-04T10:16:12.8866708Z RUNNER_TEMP=/home/runner/_work/_temp 2025-12-04T10:16:12.8867083Z NUM_TEST_SHARDS=3 2025-12-04T10:16:12.8867385Z UCX_HOME=/usr 2025-12-04T10:16:12.8867969Z GITHUB_STATE=/home/runner/_work/_temp/_runner_file_commands/save_state_3095bd82-0065-4572-9784-f6f76da4d44f 2025-12-04T10:16:12.8868973Z JOB_NAME=linux-jammy-rocm-py3.10 / test (distributed, 1, 3, linux.rocm.gpu.gfx942.4.b, mem_leak_check, unstable) 2025-12-04T10:16:12.8869668Z MAGMA_HOME=/opt/rocm/magma 2025-12-04T10:16:12.8870277Z GITHUB_ENV=/home/runner/_work/_temp/_runner_file_commands/set_env_3095bd82-0065-4572-9784-f6f76da4d44f 2025-12-04T10:16:12.8871124Z GITHUB_EVENT_PATH=/home/runner/_work/_temp/_github_workflow/event.json 2025-12-04T10:16:12.8871741Z GITHUB_EVENT_NAME=schedule 2025-12-04T10:16:12.8872247Z GITHUB_ACTIONS_RUNNER_EXTRA_USER_AGENT=actions-runner-controller/0.12.1 2025-12-04T10:16:12.8872773Z DASHBOARD_TAG= 2025-12-04T10:16:12.8873079Z GITHUB_RUN_ID=19922849170 2025-12-04T10:16:12.8873747Z GITHUB_STEP_SUMMARY=/home/runner/_work/_temp/_runner_file_commands/step_summary_3095bd82-0065-4572-9784-f6f76da4d44f 2025-12-04T10:16:12.8874474Z GITHUB_ACTOR=pytorchmergebot 2025-12-04T10:16:12.8874827Z PR_NUMBER= 2025-12-04T10:16:12.8875121Z GITHUB_RUN_ATTEMPT=1 2025-12-04T10:16:12.8875437Z VALGRIND=ON 2025-12-04T10:16:12.8875742Z ANACONDA_PYTHON_VERSION=3.10 2025-12-04T10:16:12.8876178Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2025-12-04T10:16:12.8876603Z TERM=vt100 2025-12-04T10:16:12.8876881Z INSTALLED_VISION=yes 2025-12-04T10:16:12.8877192Z BRANCH=main 2025-12-04T10:16:12.8877498Z 
OPENSSL_ROOT_DIR=/opt/openssl 2025-12-04T10:16:12.8877852Z TESTS_TO_INCLUDE= 2025-12-04T10:16:12.8878375Z GITHUB_ACTION_PATH=/home/runner/_work/pytorch/pytorch/./.github/actions/setup-rocm 2025-12-04T10:16:12.8878990Z GITHUB_SERVER_URL=https://github.com 2025-12-04T10:16:12.8879441Z PYTORCH_ROCM_ARCH=gfx90a;gfx942;gfx950;gfx1100 2025-12-04T10:16:12.8879925Z UCC_COMMIT=9f4b242cbbd8b1462cbc732eb29316cdfa124b77 2025-12-04T10:16:12.8880352Z REENABLED_ISSUES= 2025-12-04T10:16:12.8880889Z SHLVL=1 2025-12-04T10:16:12.8881166Z MAX_JOBS=126 2025-12-04T10:16:12.8881585Z RUNNER_TEST_RESULTS_DIR=/home/runner/_work/_temp/test-results 2025-12-04T10:16:12.8882073Z GITHUB_ACTOR_ID=97764156 2025-12-04T10:16:12.8882442Z RUNNER_TOOL_CACHE=/home/runner/_work/_tool 2025-12-04T10:16:12.8882956Z GITHUB_WORKFLOW_SHA=ffd9b0fb4355e97af82fc42cf185c3ffa0fc0a32 2025-12-04T10:16:12.8883431Z GITHUB_REF_NAME=main 2025-12-04T10:16:12.8883747Z ROCM_PATH=/opt/rocm 2025-12-04T10:16:12.8884051Z GITHUB_JOB=test 2025-12-04T10:16:12.8884353Z NO_TEST_TIMEOUT=False 2025-12-04T10:16:12.8884707Z GITHUB_REPOSITORY=pytorch/pytorch 2025-12-04T10:16:12.8885080Z LC_ALL=C.UTF-8 2025-12-04T10:16:12.8885383Z GITHUB_RETENTION_DAYS=90 2025-12-04T10:16:12.8885757Z RUNNER_WORKSPACE=/home/runner/_work/pytorch 2025-12-04T10:16:12.8886166Z OPENSSL_DIR=/opt/openssl 2025-12-04T10:16:12.8886511Z GITHUB_ACTION_REPOSITORY= 2025-12-04T10:16:12.8887761Z PATH=/opt/cache/bin:/opt/rocm/llvm/bin:/opt/rocm/opencl/bin:/opt/rocm/hip/bin:/opt/rocm/hcc/bin:/opt/rocm/bin:/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-12-04T10:16:12.8888904Z GITHUB_BASE_REF= 2025-12-04T10:16:12.8889196Z CI=true 2025-12-04T10:16:12.8889495Z GITHUB_REPOSITORY_OWNER=pytorch 2025-12-04T10:16:12.8889853Z JOB_ID=57116213174 2025-12-04T10:16:12.8890152Z GITHUB_HEAD_REF= 2025-12-04T10:16:12.8890444Z GITHUB_ACTION_REF= 2025-12-04T10:16:12.8890796Z TEST_SHOWLOCALS=False 2025-12-04T10:16:12.8891147Z GITHUB_WORKFLOW=trunk-rocm-mi300 2025-12-04T10:16:12.8891533Z DEBIAN_FRONTEND=noninteractive 2025-12-04T10:16:12.8892191Z GITHUB_OUTPUT=/home/runner/_work/_temp/_runner_file_commands/set_output_3095bd82-0065-4572-9784-f6f76da4d44f 2025-12-04T10:16:12.8892857Z NO_TD=False 2025-12-04T10:16:12.8893151Z OLDPWD=/var/lib/jenkins 2025-12-04T10:16:12.8893469Z _=/usr/bin/env 2025-12-04T10:16:12.8893772Z + echo 'Testing pytorch' 2025-12-04T10:16:12.8894099Z Testing pytorch 2025-12-04T10:16:12.8894406Z + export LANG=C.UTF-8 2025-12-04T10:16:12.8894716Z + LANG=C.UTF-8 2025-12-04T10:16:12.8895006Z + PR_NUMBER= 2025-12-04T10:16:12.8895325Z + [[ distributed == \d\e\f\a\u\l\t ]] 2025-12-04T10:16:12.8895735Z + [[ distributed == \d\i\s\t\r\i\b\u\t\e\d ]] 2025-12-04T10:16:12.8896168Z + [[ linux-jammy-rocm-py3.10 == *rocm* ]] 2025-12-04T10:16:12.8896593Z + export HIP_VISIBLE_DEVICES=0,1,2,3 2025-12-04T10:16:12.8896985Z + HIP_VISIBLE_DEVICES=0,1,2,3 2025-12-04T10:16:12.8897355Z + [[ distributed == \s\l\o\w ]] 2025-12-04T10:16:12.8897782Z + [[ linux-jammy-rocm-py3.10 == *slow-gradcheck* ]] 2025-12-04T10:16:12.8898245Z + [[ linux-jammy-rocm-py3.10 == *cuda* ]] 2025-12-04T10:16:12.8898756Z + [[ linux-jammy-rocm-py3.10 == *rocm* ]] 2025-12-04T10:16:12.8899190Z + export PYTORCH_TESTING_DEVICE_ONLY_FOR=cuda 2025-12-04T10:16:12.8899620Z + PYTORCH_TESTING_DEVICE_ONLY_FOR=cuda 2025-12-04T10:16:12.8900056Z + [[ distributed == *crossref* ]] 2025-12-04T10:16:12.8900444Z + [[ linux-jammy-rocm-py3.10 == *rocm* ]] 2025-12-04T10:16:12.8900882Z + export VALGRIND=OFF 
2025-12-04T10:16:12.8901196Z + VALGRIND=OFF
2025-12-04T10:16:12.8901479Z + rocminfo
2025-12-04T10:16:12.8976954Z ROCk module version 6.12.12 is loaded
=====================
HSA System Attributes
=====================
Runtime Version:         1.18
Runtime Ext Version:     1.14
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE
System Endianness:       LITTLE
Mwaitx:                  DISABLED
XNACK enabled:           NO
DMAbuf Support:          YES
VMM Support:             YES

==========
HSA Agents
==========
Agent 1: CPU, AMD EPYC 9575F 64-Core Processor (Node 0, 64 compute units, 3300 MHz max clock,
  L1 49152(0xc000) KB, FULL_PROFILE). Four GLOBAL memory pools of 1584733356(0x5e751cac) KB
  each (FINE GRAINED; EXTENDED FINE GRAINED; KERNARG, FINE GRAINED; COARSE GRAINED), all
  allocatable with 4KB granule/alignment and accessible by all.
Agent 2: CPU, identical to Agent 1 except Node 1 and pool size 1585355580(0x5e7e9b3c) KB.
Agent 3: GPU, gfx942 (Uuid GPU-0786bf8e0c323cdf, Node 2, BDFID 29952, chip ID 29861(0x74a5)).
  KERNEL_DISPATCH, BASE_PROFILE, fast F16; 304 compute units, 4 SIMDs per CU, 32 shader
  engines, wavefront size 64, 2100 MHz max clock; caches L1 32 KB / L2 4096 KB / L3 262144 KB;
  queue max 131072, max waves per CU 32, packet processor uCode 185, SDMA engine uCode 24.
  Workgroup max size 1024 per dimension; grid max 4294967295 (x 2147483647, y/z 65535).
  Three GLOBAL pools of 268419072(0xfffc000) KB (COARSE GRAINED; EXTENDED FINE GRAINED;
  FINE GRAINED; allocatable, 4KB granule, 2048KB recommended granule, not accessible by all)
  plus a 64 KB GROUP pool.
  ISAs: amdgcn-amd-amdhsa--gfx942:sramecc+:xnack- and amdgcn-amd-amdhsa--gfx9-4-generic:sramecc+:xnack-.
Agents 4-6: GPUs identical to Agent 3 except for identity:
  Agent 4: Uuid GPU-f1277e79873f2863, Node 3, BDFID 1280
  Agent 5: Uuid GPU-a60c6760ff6d4bed, Node 4, BDFID 25856
  Agent 6: Uuid GPU-0c7715a1f9faf149, Node 5, BDFID 5376
*** Done ***
2025-12-04T10:16:13.0038000Z + rocminfo
2025-12-04T10:16:13.0038325Z + grep -E 'Name:.*\sgfx|Marketing'
2025-12-04T10:16:13.0634426Z Marketing Name: AMD EPYC 9575F 64-Core Processor
2025-12-04T10:16:13.0635076Z Marketing Name: AMD EPYC 9575F 64-Core Processor
2025-12-04T10:16:13.0635607Z Name: gfx942
2025-12-04T10:16:13.0636100Z Marketing Name:
2025-12-04T10:16:13.0636586Z Name: gfx942
2025-12-04T10:16:13.0637064Z Marketing Name:
2025-12-04T10:16:13.0637541Z Name: gfx942
2025-12-04T10:16:13.0638209Z Marketing Name:
2025-12-04T10:16:13.0638688Z Name: gfx942
2025-12-04T10:16:13.0639164Z Marketing Name:
2025-12-04T10:16:13.0728276Z + MAYBE_ROCM=rocm/
2025-12-04T10:16:13.0728708Z + [[ linux-jammy-rocm-py3.10 == *xpu* ]]
2025-12-04T10:16:13.0729203Z + [[ linux-jammy-rocm-py3.10 != *-bazel-* ]]
2025-12-04T10:16:13.0729649Z + pip_install ninja==1.10.2
2025-12-04T10:16:13.0730156Z + pip_install_pkg='python3 -m pip install --progress-bar off'
2025-12-04T10:16:13.0730962Z + python3 -m pip install --progress-bar off ninja==1.10.2
2025-12-04T10:16:13.2671678Z Collecting ninja==1.10.2
2025-12-04T10:16:13.2922331Z   Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl.metadata (5.0 kB)
2025-12-04T10:16:13.3002533Z   Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl (108 kB)
2025-12-04T10:16:13.4605106Z Installing collected packages: ninja
2025-12-04T10:16:13.4605695Z   Attempting uninstall: ninja
2025-12-04T10:16:13.4607295Z     Found existing installation: ninja 1.11.1.4
2025-12-04T10:16:13.4617118Z     Uninstalling ninja-1.11.1.4:
2025-12-04T10:16:13.4646257Z       Successfully uninstalled ninja-1.11.1.4
2025-12-04T10:16:13.4750466Z Successfully installed ninja-1.10.2
2025-12-04T10:16:13.5068552Z + export PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/rocm/llvm/bin:/opt/rocm/opencl/bin:/opt/rocm/hip/bin:/opt/rocm/hcc/bin:/opt/rocm/bin:/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
2025-12-04T10:16:13.5071652Z + PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/rocm/llvm/bin:/opt/rocm/opencl/bin:/opt/rocm/hip/bin:/opt/rocm/hcc/bin:/opt/rocm/bin:/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
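The four gfx942 agents that rocminfo lists can be cross-checked from inside PyTorch. A minimal sketch (assuming a ROCm build; gcnArchName availability varies across PyTorch versions, hence the fallback):

import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    # gcnArchName carries the "gfx942"-style ISA string on ROCm builds;
    # fall back to the device name if the attribute is absent.
    arch = getattr(props, "gcnArchName", props.name)
    print(f"device {i}: {arch}, "
          f"{props.total_memory / 2**30:.0f} GiB, "
          f"{props.multi_processor_count} compute units")

Against the inventory above this should report four devices with 304 compute units and roughly 256 GiB each (268419072 KB per pool).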
2025-12-04T10:16:13.5073103Z + [[ linux-jammy-rocm-py3.10 == *aarch64* ]]
2025-12-04T10:16:13.5073757Z + [[ linux-jammy-rocm-py3.10 == *asan* ]]
2025-12-04T10:16:13.5074338Z + [[ linux-jammy-rocm-py3.10 == *-debug* ]]
2025-12-04T10:16:13.5074777Z + [[ linux-jammy-rocm-py3.10 != *-bazel-* ]]
2025-12-04T10:16:13.5075407Z + echo 'We are not in debug mode: linux-jammy-rocm-py3.10. Expect the assertion to pass'
2025-12-04T10:16:13.5076166Z We are not in debug mode: linux-jammy-rocm-py3.10. Expect the assertion to pass
2025-12-04T10:16:13.5077192Z + cd test
2025-12-04T10:16:13.5077652Z + python -c 'import torch; torch._C._crash_if_debug_asserts_fail(424242)'
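The command above is the release-build sanity check: with -DNDEBUG set (as in the build settings printed below), internal debug asserts compile out and the call returns normally, while a debug build would abort. Reproduced as a sketch (_crash_if_debug_asserts_fail is a private testing hook, shown only because the harness itself calls it):

import torch

# Expected to be a no-op on a Release build; a debug build would trip
# the internal assert and crash the interpreter instead of returning.
torch._C._crash_if_debug_asserts_fail(424242)
print("debug asserts disabled, as expected for a release build")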
2025-12-04T10:16:14.3988350Z + [[ distributed == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]]
2025-12-04T10:16:14.3988947Z + [[ distributed == \n\o\g\p\u\_\A\V\X\5\1\2 ]]
2025-12-04T10:16:14.3989464Z + [[ distributed == \l\e\g\a\c\y\_\n\v\i\d\i\a\_\d\r\i\v\e\r ]]
2025-12-04T10:16:14.3994668Z + DYNAMO_BENCHMARK_FLAGS=()
2025-12-04T10:16:14.3995430Z + [[ distributed == *pr_time_benchmarks* ]]
2025-12-04T10:16:14.3995926Z + [[ distributed == *dynamo_eager* ]]
2025-12-04T10:16:14.3996376Z + [[ distributed == *aot_eager* ]]
2025-12-04T10:16:14.3996769Z + [[ distributed == *aot_inductor* ]]
2025-12-04T10:16:14.3997184Z + [[ distributed == *max_autotune_inductor* ]]
2025-12-04T10:16:14.3997601Z + [[ distributed == *inductor* ]]
2025-12-04T10:16:14.3997983Z + [[ distributed == *dynamic* ]]
2025-12-04T10:16:14.3998356Z + [[ distributed == *cpu* ]]
2025-12-04T10:16:14.3998715Z + [[ distributed == *xpu* ]]
2025-12-04T10:16:14.3999151Z + DYNAMO_BENCHMARK_FLAGS+=(--device cuda)
2025-12-04T10:16:14.4028045Z + [[ linux-jammy-rocm-py3.10 == *libtorch* ]]
2025-12-04T10:16:14.4028505Z + [[ linux-jammy-rocm-py3.10 == *-bazel-* ]]
2025-12-04T10:16:14.4033316Z + cd test
2025-12-04T10:16:14.4034112Z + python -c 'import torch; print(torch.__config__.show())'
2025-12-04T10:16:15.1473900Z PyTorch built with:
2025-12-04T10:16:15.1474389Z   - GCC 11.4
2025-12-04T10:16:15.1474722Z   - C++ Version: 201703
2025-12-04T10:16:15.1475472Z   - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
2025-12-04T10:16:15.1477066Z   - Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d)
2025-12-04T10:16:15.1477637Z   - OpenMP 201511 (a.k.a. OpenMP 4.5)
2025-12-04T10:16:15.1478088Z   - LAPACK is enabled (usually provided by MKL)
2025-12-04T10:16:15.1478511Z   - NNPACK is enabled
2025-12-04T10:16:15.1478872Z   - CPU capability usage: AVX512
2025-12-04T10:16:15.1479272Z   - HIP Runtime 7.1.25424
2025-12-04T10:16:15.1479613Z   - MIOpen 3.5.1
2025-12-04T10:16:15.1479915Z   - Magma 2.9.0
2025-12-04T10:16:15.1485460Z   - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=35b7a9a26c5923d98aebaa41a031dae21788a9ee, CXX_COMPILER=/opt/cache/bin/c++, CXX_FLAGS= -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_FBGEMM_GENAI -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -DC10_NODEPRECATED -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -faligned-new -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.10.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_CUSPARSELT=OFF, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=ON, USE_ROCM_KERNEL_ASSERT=OFF, USE_XCCL=OFF, USE_XPU=OFF,
2025-12-04T10:16:15.1490967Z
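Both this report and the threading report that follows come from public introspection helpers, so the same information is available to any script debugging a build. A short sketch:

import torch

# show() returns the multi-line build summary printed above;
# parallel_info() returns the threading report that follows.
print(torch.__config__.show())
print(torch.__config__.parallel_info())

Setting OMP_NUM_THREADS or MKL_NUM_THREADS before launching the process would show up in the "[not set]" lines of the second report.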
2025-12-04T10:16:15.3765921Z + cd test
2025-12-04T10:16:15.3766454Z + python -c 'import torch; print(torch.__config__.parallel_info())'
2025-12-04T10:16:16.0097989Z ATen/Parallel:
2025-12-04T10:16:16.0098441Z 	at::get_num_threads() : 128
2025-12-04T10:16:16.0098938Z 	at::get_num_interop_threads() : 128
2025-12-04T10:16:16.0099370Z OpenMP 201511 (a.k.a. OpenMP 4.5)
2025-12-04T10:16:16.0099764Z 	omp_get_max_threads() : 128
2025-12-04T10:16:16.0100500Z Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
2025-12-04T10:16:16.0101413Z 	mkl_get_max_threads() : 128
2025-12-04T10:16:16.0101947Z Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d)
2025-12-04T10:16:16.0102828Z std::thread::hardware_concurrency() : 128
2025-12-04T10:16:16.0103244Z Environment variables:
2025-12-04T10:16:16.0103595Z 	OMP_NUM_THREADS : [not set]
2025-12-04T10:16:16.0103950Z 	MKL_NUM_THREADS : [not set]
2025-12-04T10:16:16.0104315Z ATen parallel backend: OpenMP
2025-12-04T10:16:16.0104558Z
2025-12-04T10:16:16.1981907Z + [[ distributed == *numpy_2* ]]
2025-12-04T10:16:16.1982415Z + [[ linux-jammy-rocm-py3.10 == *aarch64* ]]
2025-12-04T10:16:16.1982901Z + [[ distributed == *backward* ]]
2025-12-04T10:16:16.1983377Z + [[ distributed == *libtorch_agnostic_targetting* ]]
2025-12-04T10:16:16.1983839Z + [[ distributed == *xla* ]]
2025-12-04T10:16:16.1984206Z + [[ distributed == *vllm* ]]
2025-12-04T10:16:16.1984589Z + [[ distributed == *executorch* ]]
2025-12-04T10:16:16.1984998Z + [[ distributed == \j\i\t\_\l\e\g\a\c\y ]]
2025-12-04T10:16:16.1985426Z + [[ distributed == \q\u\a\n\t\i\z\a\t\i\o\n ]]
2025-12-04T10:16:16.1985876Z + [[ linux-jammy-rocm-py3.10 == *libtorch* ]]
2025-12-04T10:16:16.1986313Z + [[ distributed == distributed ]]
2025-12-04T10:16:16.1986687Z + test_distributed
2025-12-04T10:16:16.1987040Z + echo 'Testing distributed python tests'
2025-12-04T10:16:16.1987467Z Testing distributed python tests
2025-12-04T10:16:16.1987994Z + python test/run_test.py --distributed-tests --shard 1 3 --verbose
2025-12-04T10:16:17.8442413Z Excluding distributed/rpc/test_faulty_agent on ROCm
2025-12-04T10:16:17.8443059Z Excluding distributed/rpc/test_tensorpipe_agent on ROCm
2025-12-04T10:16:17.8444306Z Excluding distributed/rpc/test_share_memory on ROCm
2025-12-04T10:16:17.8444889Z Excluding distributed/rpc/cuda/test_tensorpipe_agent on ROCm
2025-12-04T10:16:18.8127836Z Downloading https://ossci-metrics.s3.amazonaws.com/disabled-tests-condensed.json to /var/lib/jenkins/pytorch/test/.pytorch-disabled-tests.json
2025-12-04T10:16:19.1749764Z Ignoring disabled issues: ['']
2025-12-04T10:16:19.1893782Z Found test times from artifacts
2025-12-04T10:16:19.2321617Z Found test times from artifacts
2025-12-04T10:16:19.2331498Z Running all tests
2025-12-04T10:16:19.2404991Z Running parallel tests on 1 processes
2025-12-04T10:16:19.2406403Z Name: tests to run (est.
time: 169.35min) 2025-12-04T10:16:19.2406847Z Serial tests (74): 2025-12-04T10:16:19.2407376Z distributed/test_inductor_collectives 1/2 2025-12-04T10:16:19.2407859Z distributed/tensor/test_dtensor_export 1/1 2025-12-04T10:16:19.2408401Z distributed/algorithms/quantization/test_quantization 1/1 2025-12-04T10:16:19.2409001Z distributed/algorithms/ddp_comm_hooks/test_ddp_hooks 1/1 2025-12-04T10:16:19.2409547Z distributed/tensor/debug/test_op_coverage 1/1 2025-12-04T10:16:19.2410087Z distributed/tensor/parallel/test_micro_pipeline_tp 1/1 2025-12-04T10:16:19.2410591Z distributed/_tools/test_mod_tracker 1/1 2025-12-04T10:16:19.2411134Z distributed/_shard/sharded_tensor/test_logger 1/1 2025-12-04T10:16:19.2411622Z distributed/tensor/test_dtensor_compile 1/4 2025-12-04T10:16:19.2412090Z distributed/tensor/test_dtensor_compile 4/4 2025-12-04T10:16:19.2412534Z distributed/tensor/test_dtensor 2/3 2025-12-04T10:16:19.2413002Z distributed/test_aten_comm_compute_reordering 2/3 2025-12-04T10:16:19.2413463Z distributed/tensor/test_dynamic 1/1 2025-12-04T10:16:19.2413904Z distributed/checkpoint/e2e/test_fsdp_ep 1/1 2025-12-04T10:16:19.2414369Z distributed/pipelining/test_unflatten 1/1 2025-12-04T10:16:19.2414832Z distributed/tensor/test_dtensor_testbase 1/1 2025-12-04T10:16:19.2415294Z distributed/tensor/test_redistribute 1/2 2025-12-04T10:16:19.2415741Z distributed/tensor/test_tensor_ops 2/4 2025-12-04T10:16:19.2416167Z distributed/test_nvshmem 1/1 2025-12-04T10:16:19.2416565Z distributed/tensor/test_attention 1/1 2025-12-04T10:16:19.2416983Z distributed/test_device_mesh 2/2 2025-12-04T10:16:19.2417402Z distributed/tensor/test_dtensor_ops 1/1 2025-12-04T10:16:19.2417840Z distributed/checkpoint/test_fsspec 1/1 2025-12-04T10:16:19.2418907Z distributed/tensor/experimental/test_tp_transform 1/1 2025-12-04T10:16:19.2419424Z distributed/_composable/test_checkpoint 1/1 2025-12-04T10:16:19.2419892Z distributed/_tools/test_fsdp2_mem_tracker 1/1 2025-12-04T10:16:19.2420355Z distributed/tensor/test_embedding_ops 1/1 2025-12-04T10:16:19.2420896Z distributed/checkpoint/test_fsdp_optim_state 1/1 2025-12-04T10:16:19.2421423Z distributed/checkpoint/e2e/test_e2e_save_and_load 1/1 2025-12-04T10:16:19.2421947Z distributed/checkpoint/test_dtensor_resharding 1/1 2025-12-04T10:16:19.2422508Z distributed/_composable/test_replicate_with_compiler 1/1 2025-12-04T10:16:19.2423099Z distributed/_composable/fsdp/test_fully_shard_autograd 1/1 2025-12-04T10:16:19.2423685Z distributed/_composable/fsdp/test_fully_shard_compile 1/1 2025-12-04T10:16:19.2424193Z distributed/_pycute/test_coalesce 1/1 2025-12-04T10:16:19.2424629Z distributed/_pycute/test_complement 1/1 2025-12-04T10:16:19.2425073Z distributed/_pycute/test_composition 1/1 2025-12-04T10:16:19.2425518Z distributed/_pycute/test_int_tuple 1/1 2025-12-04T10:16:19.2425952Z distributed/_pycute/test_left_inverse 1/1 2025-12-04T10:16:19.2426401Z distributed/_pycute/test_right_inverse 1/1 2025-12-04T10:16:19.2426858Z distributed/tensor/debug/test_debug_mode 1/1 2025-12-04T10:16:19.2427320Z distributed/_composable/test_replicate 1/1 2025-12-04T10:16:19.2427778Z distributed/checkpoint/test_pg_transport 1/1 2025-12-04T10:16:19.2428342Z distributed/_composable/fsdp/test_fully_shard_mixed_precision 1/1 2025-12-04T10:16:19.2429075Z distributed/checkpoint/test_utils 1/1 2025-12-04T10:16:19.2429627Z distributed/checkpoint/_experimental/test_checkpoint_process 1/1 2025-12-04T10:16:19.2430163Z distributed/test_c10d_logger 1/1 2025-12-04T10:16:19.2430698Z 
distributed/_composable/test_replicate_training 1/1 2025-12-04T10:16:19.2431244Z distributed/optim/test_apply_optimizer_in_backward 1/1 2025-12-04T10:16:19.2431737Z distributed/fsdp/test_fsdp_uneven 1/1 2025-12-04T10:16:19.2432176Z distributed/tensor/test_op_strategy 1/1 2025-12-04T10:16:19.2432609Z distributed/fsdp/test_fsdp_grad_acc 1/1 2025-12-04T10:16:19.2433082Z distributed/checkpoint/test_state_dict_stager 1/1 2025-12-04T10:16:19.2433587Z distributed/fsdp/test_fsdp_freezing_weights 1/1 2025-12-04T10:16:19.2434056Z distributed/_pycute/test_typing 1/1 2025-12-04T10:16:19.2434492Z distributed/test_distributed_spawn 1/7 2025-12-04T10:16:19.2434923Z distributed/test_distributed_spawn 4/7 2025-12-04T10:16:19.2435358Z distributed/test_distributed_spawn 7/7 2025-12-04T10:16:19.2435828Z distributed/fsdp/test_fsdp_sharded_grad_scaler 1/1 2025-12-04T10:16:19.2436369Z distributed/_shard/sharding_plan/test_sharding_plan 1/1 2025-12-04T10:16:19.2436863Z distributed/fsdp/test_fsdp_comm 1/1 2025-12-04T10:16:19.2437309Z distributed/fsdp/test_fsdp_clip_grad_norm 1/1 2025-12-04T10:16:19.2437759Z distributed/tensor/test_utils 1/1 2025-12-04T10:16:19.2438176Z distributed/test_data_parallel 1/1 2025-12-04T10:16:19.2438665Z distributed/_composable/fsdp/test_fully_shard_memory 1/1 2025-12-04T10:16:19.2439222Z distributed/optim/test_zero_redundancy_optimizer 1/1 2025-12-04T10:16:19.2439697Z distributed/test_c10d_spawn_gloo 1/1 2025-12-04T10:16:19.2440167Z distributed/fsdp/test_distributed_checkpoint 1/1 2025-12-04T10:16:19.2440668Z distributed/test_c10d_spawn_nccl 1/1 2025-12-04T10:16:19.2441118Z distributed/fsdp/test_fsdp_use_orig_params 1/1 2025-12-04T10:16:19.2441647Z distributed/_shard/sharded_tensor/test_sharded_tensor 1/1 2025-12-04T10:16:19.2442154Z distributed/test_launcher 1/1 2025-12-04T10:16:19.2442556Z distributed/test_store 1/1 2025-12-04T10:16:19.2442942Z distributed/test_c10d_nccl 1/2 2025-12-04T10:16:19.2443352Z distributed/elastic/timer/api_test 1/1 2025-12-04T10:16:19.2443757Z Parallel tests (0): 2025-12-04T10:16:19.2444131Z Name: excluded (est. time: 0.0min) 2025-12-04T10:16:19.2444507Z Serial tests (0): 2025-12-04T10:16:19.2444969Z Parallel tests (0): 2025-12-04T10:16:19.2445584Z Running distributed/test_inductor_collectives 1/2 ... [2025-12-04 10:16:19.240751][4968408.090687389] 2025-12-04T10:16:19.2446269Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:16:19.2447641Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_inductor_collectives.py', '--shard-id=1', '--num-shards=2', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:16:19.240974] 2025-12-04T10:19:57.1935489Z 2025-12-04T10:19:57.1937123Z distributed/test_inductor_collectives 1/2 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_inductor_collectives_1.2_a200c867738a4cda_.log 2025-12-04T10:19:57.1960952Z Running 41 items in this shard: test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_all_to_all_recompute_is_always_banned_override_with_ac_True, test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_all_to_all_single_inductor, test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_allgather_contiguous_input, test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_allgather_into_tensor_inductor, test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_allgather_output_buffer_reuse, test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_allgather_scalar_tensor_input, test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_allreduce_inductor, test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_allreduce_inductor_cudagraph_trees, test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_allreduce_input_buffer_reuse, test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_broadcast_inductor, test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_c10d_functional_tagged_pt2_compliant, test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_permute_tensor, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_backwards, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_get_world_group_source__get_default_group, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_get_world_group_source_group_WORLD, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_graphbreaks_unsupported_async_op, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_rewrite_dist_all_gather, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_rewrite_dist_all_gather_args_match, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_rewrite_dist_all_to_all_single, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_rewrite_dist_allreduce_pg_mode_kwargs, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_rewrite_dist_allreduce_pg_mode_positional, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_rewrite_dist_allreduce_pg_mode_unspecified, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_rewrite_dist_allreduce_reduce_op_reduce_op1, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_rewrite_dist_allreduce_reduce_op_reduce_op2, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_rewrite_dist_allreduce_reduce_op_reduce_op4, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_rewrite_dist_reduce_scatter, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_trace_all_gather_tensor_pg, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_trace_allgather_coalesced, 
test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_trace_allreduce, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_dynamo_trace_reduce_scatter_tensor, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_inductor_reduce_scatter_coalesced, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_meta, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_reduce_scatter_bucket_bucket_mode_all, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_reorder_peak_memory, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_reorder_peak_memory_bucketed_bucket_mode_all, test/distributed/test_inductor_collectives.py::TestCollectivesInductor::test_reorder_peak_memory_bucketed_bucket_mode_all_custom_ops, test/distributed/test_inductor_collectives.py::TestSyncDecisionCrossRanks::test_all_gather_comm_analysis, test/distributed/test_inductor_collectives.py::TestSyncDecisionCrossRanks::test_all_to_all_comm_analysis, test/distributed/test_inductor_collectives.py::TestSyncDecisionCrossRanks::test_reduce_scatter_comm_analysis, test/distributed/test_inductor_collectives.py::TestSyncDecisionCrossRanks::test_regression_use_nccl_estimate_with_gloo, test/distributed/test_inductor_collectives.py::TestSyncDecisionCrossRanks::test_sync_decision_cross_ranks 2025-12-04T10:19:57.1983170Z 2025-12-04T10:19:57.1983645Z Finished distributed/test_inductor_collectives 1/2 ... [2025-12-04 10:19:57.193482][4968626.043414428], took 3.63min 2025-12-04T10:19:57.1985250Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:19:59.3726821Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:19:59.3727687Z GITHUB_RUN_ID, GITHUB_RUN_ATTEMPT, or ARTIFACTS_FILE_SUFFIX not set, not uploading 2025-12-04T10:19:59.3728369Z Uploading artifacts took 0.00 seconds 2025-12-04T10:19:59.3729064Z Running distributed/tensor/test_dtensor_export 1/1 ... [2025-12-04 10:19:59.372319][4968628.222253667] 2025-12-04T10:19:59.3729759Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:19:59.3731399Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/test_dtensor_export.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:19:59.372593] 2025-12-04T10:20:05.8497537Z 2025-12-04T10:20:05.8499175Z distributed/tensor/test_dtensor_export 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.test_dtensor_export_1.1_fd9ca5c59b56d53d_.log 2025-12-04T10:20:05.8505289Z Running 9 items in this shard: test/distributed/tensor/test_dtensor_export.py::DTensorExportTest::test_annotate_aot_export_joint_with_descriptors_alone, test/distributed/tensor/test_dtensor_export.py::DTensorExportTest::test_dtensor_data_dependent_index_and_slice, test/distributed/tensor/test_dtensor_export.py::DTensorExportTest::test_dynamic_shapes_export_fn_with_answer0, test/distributed/tensor/test_dtensor_export.py::DTensorExportTest::test_einsum_dtensor_export_export_fn0, test/distributed/tensor/test_dtensor_export.py::DTensorExportTest::test_export_parallelize_module_with_dtensor_input_export_fn0, test/distributed/tensor/test_dtensor_export.py::DTensorExportTest::test_export_parallelize_module_with_dtensor_input_export_fn1, test/distributed/tensor/test_dtensor_export.py::DTensorExportTest::test_flex_attention_dtensor_export_export_fn0, test/distributed/tensor/test_dtensor_export.py::DTensorExportTest::test_strict_export_parallelize_module_with_dtensor_input, test/distributed/tensor/test_dtensor_export.py::DTensorExportTest::test_union_typed_annotation 2025-12-04T10:20:05.8510525Z 2025-12-04T10:20:05.8511943Z Finished distributed/tensor/test_dtensor_export 1/1 ... [2025-12-04 10:20:05.849273][4968634.699205032], took 0.11min 2025-12-04T10:20:05.8513451Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:20:05.8532402Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:20:05.8538308Z Running distributed/algorithms/quantization/test_quantization 1/1 ... [2025-12-04 10:20:05.853564][4968634.70349844] 2025-12-04T10:20:05.8540092Z MPI not available -- MPI backend tests will be skipped 2025-12-04T10:20:05.8541771Z Running distributed tests for the test backend with env init_method 2025-12-04T10:20:05.8544186Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:20:05.8548841Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/algorithms/quantization/test_quantization.py', '--shard-id=1', '--num-shards=1', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:20:05.854644] 2025-12-04T10:20:07.6283555Z 2025-12-04T10:20:07.6285294Z distributed/algorithms/quantization/test_quantization 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.algorithms.quantization.test_quantization_1.1_732ecee985c5cab5_.log 2025-12-04T10:20:07.6286719Z Running 0 items in this shard: 2025-12-04T10:20:07.6286986Z 2025-12-04T10:20:07.6293881Z Running distributed tests for the test backend with file init_method 2025-12-04T10:20:07.6295113Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:20:07.6300036Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/algorithms/quantization/test_quantization.py', '--shard-id=1', '--num-shards=1', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:20:07.629702] 2025-12-04T10:20:09.4189669Z 2025-12-04T10:20:09.4190443Z distributed/algorithms/quantization/test_quantization 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.algorithms.quantization.test_quantization_1.1_1011a1d54aca202d_.log 2025-12-04T10:20:09.4191391Z Running 0 items in this shard: 2025-12-04T10:20:09.4191555Z 2025-12-04T10:20:09.4200365Z Running distributed tests for the nccl backend with env init_method 2025-12-04T10:20:09.4202070Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:20:09.4206628Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/algorithms/quantization/test_quantization.py', '--shard-id=1', '--num-shards=1', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:20:09.420460] 2025-12-04T10:20:36.5750403Z 2025-12-04T10:20:36.5752446Z distributed/algorithms/quantization/test_quantization 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.algorithms.quantization.test_quantization_1.1_f50e0e27d9c63b1e_.log 2025-12-04T10:20:36.5757033Z Running 6 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_bfp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_fp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_bfp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_fp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_bfp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_fp16 2025-12-04T10:20:36.5761175Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_bfp16 2025-12-04T10:20:36.5763318Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_fp16 2025-12-04T10:20:36.5764636Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_bfp16 2025-12-04T10:20:36.5766120Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_fp16 2025-12-04T10:20:36.5767460Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_bfp16 2025-12-04T10:20:36.5768833Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_fp16 2025-12-04T10:20:36.5769584Z 2025-12-04T10:20:36.5769895Z Running distributed tests for the nccl backend with file init_method 2025-12-04T10:20:36.5770463Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:20:36.5772054Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/algorithms/quantization/test_quantization.py', '--shard-id=1', '--num-shards=1', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:20:36.576542] 2025-12-04T10:21:03.5859991Z 2025-12-04T10:21:03.5861571Z distributed/algorithms/quantization/test_quantization 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.algorithms.quantization.test_quantization_1.1_37e3ebac5e97e898_.log 2025-12-04T10:21:03.5867106Z Running 6 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_bfp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_fp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_bfp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_fp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_bfp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_fp16 2025-12-04T10:21:03.5871190Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_bfp16 2025-12-04T10:21:03.5872512Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_fp16 2025-12-04T10:21:03.5873816Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_bfp16 2025-12-04T10:21:03.5875109Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_fp16 2025-12-04T10:21:03.5879312Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_bfp16 2025-12-04T10:21:03.5880771Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_fp16 2025-12-04T10:21:03.5881513Z 2025-12-04T10:21:03.5881823Z Running distributed tests for the gloo backend with env init_method 2025-12-04T10:21:03.5882381Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:21:03.5883895Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/algorithms/quantization/test_quantization.py', '--shard-id=1', '--num-shards=1', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:21:03.587587] 2025-12-04T10:21:21.6834812Z 2025-12-04T10:21:21.6837304Z distributed/algorithms/quantization/test_quantization 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.algorithms.quantization.test_quantization_1.1_9675cb544f230ae8_.log 2025-12-04T10:21:21.6842040Z Running 6 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_bfp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_fp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_bfp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_fp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_bfp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_fp16 2025-12-04T10:21:21.6846082Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_bfp16 2025-12-04T10:21:21.6847425Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_fp16 2025-12-04T10:21:21.6848748Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_bfp16 2025-12-04T10:21:21.6850050Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_fp16 2025-12-04T10:21:21.6851476Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_bfp16 2025-12-04T10:21:21.6853037Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_fp16 2025-12-04T10:21:21.6853787Z 2025-12-04T10:21:21.6854103Z Running distributed tests for the gloo backend with file init_method 2025-12-04T10:21:21.6854679Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:21:21.6856217Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/algorithms/quantization/test_quantization.py', '--shard-id=1', '--num-shards=1', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
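The --subprocess flag in these command lines matches the output shape that follows: after the six-item collection line, each test is reported again as its own one-item shard, consistent with per-test process isolation. A minimal stand-in for that behavior (a hypothetical wrapper, not run_test.py itself; assumes pytest is installed):

import subprocess
import sys

# One pytest process per collected test id, mirroring the repeated
# "Running 1 items in this shard: <test id>" lines.
test_ids = [
    "test/distributed/algorithms/quantization/test_quantization.py"
    "::DistQuantizationTests::test_all_gather_fp16",
]
for tid in test_ids:
    print(f"Running 1 items in this shard: {tid}")
    subprocess.run([sys.executable, "-m", "pytest", "-x", tid], check=False)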
[2025-12-04 10:21:21.685091] 2025-12-04T10:21:39.6885760Z 2025-12-04T10:21:39.6886900Z distributed/algorithms/quantization/test_quantization 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.algorithms.quantization.test_quantization_1.1_6d948ce8745603ed_.log 2025-12-04T10:21:39.6889926Z Running 6 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_bfp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_fp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_bfp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_fp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_bfp16, test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_fp16 2025-12-04T10:21:39.6894151Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_bfp16 2025-12-04T10:21:39.6895478Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_gather_fp16 2025-12-04T10:21:39.6896781Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_bfp16 2025-12-04T10:21:39.6898399Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_fp16 2025-12-04T10:21:39.6900686Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_bfp16 2025-12-04T10:21:39.6902076Z Running 1 items in this shard: test/distributed/algorithms/quantization/test_quantization.py::DistQuantizationTests::test_all_to_all_single_fp16 2025-12-04T10:21:39.6902823Z 2025-12-04T10:21:39.6903390Z Finished distributed/algorithms/quantization/test_quantization 1/1 ... [2025-12-04 10:21:39.689323][4968728.539254103], took 1.56min 2025-12-04T10:21:39.6906324Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:21:39.6932968Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:21:39.6939284Z Running distributed/algorithms/ddp_comm_hooks/test_ddp_hooks 1/1 ... [2025-12-04 10:21:39.693761][4968728.543694389] 2025-12-04T10:21:39.6940049Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:21:39.6947020Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/algorithms/ddp_comm_hooks/test_ddp_hooks.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
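"Unable to locate credentials" after every suite is the standard botocore error raised when no AWS credential source (environment, config file, instance role) resolves, so the recurring failure is in the upload step rather than in the reports themselves. A guarded-upload sketch; the bucket, key, and report path below are placeholders:

import boto3
from botocore.exceptions import NoCredentialsError
from pathlib import Path

def try_upload(path, bucket, key):
    # upload_file surfaces NoCredentialsError ("Unable to locate credentials")
    # at request-signing time when no credential source resolves.
    try:
        boto3.client("s3").upload_file(path, bucket, key)
    except NoCredentialsError as exc:
        print(f"Failed to parse and upload json test reports: {exc}")

Path("report.json").write_text("{}")  # placeholder report file
try_upload("report.json", "example-bucket", "reports/report.json")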
[2025-12-04 10:21:39.694203] 2025-12-04T10:22:23.4879157Z 2025-12-04T10:22:23.4880415Z distributed/algorithms/ddp_comm_hooks/test_ddp_hooks 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.algorithms.ddp_comm_hooks.test_ddp_hooks_1.1_900d45ba1363c91e_.log 2025-12-04T10:22:23.4885617Z Running 6 items in this shard: test/distributed/algorithms/ddp_comm_hooks/test_ddp_hooks.py::DistributedDataParallelCommHookTest::test_ddp_comm_hook_allreduce_hook, test/distributed/algorithms/ddp_comm_hooks/test_ddp_hooks.py::DistributedDataParallelCommHookTest::test_ddp_comm_hook_fp16compress_hook, test/distributed/algorithms/ddp_comm_hooks/test_ddp_hooks.py::DistributedDataParallelCommHookTest::test_ddp_comm_hook_noop_hook, test/distributed/algorithms/ddp_comm_hooks/test_ddp_hooks.py::DistributedDataParallelCommHookTest::test_ddp_comm_hook_quantize_per_channel_hook, test/distributed/algorithms/ddp_comm_hooks/test_ddp_hooks.py::DistributedDataParallelCommHookTest::test_ddp_comm_hook_quantize_per_tensor_hook, test/distributed/algorithms/ddp_comm_hooks/test_ddp_hooks.py::DistributedDataParallelCommHookTest::test_is_last_hook 2025-12-04T10:22:23.4889134Z 2025-12-04T10:22:23.4889674Z Finished distributed/algorithms/ddp_comm_hooks/test_ddp_hooks 1/1 ... [2025-12-04 10:22:23.487446][4968772.337379032], took 0.73min 2025-12-04T10:22:23.4891282Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:22:23.4916495Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:22:23.4921155Z Running distributed/tensor/debug/test_op_coverage 1/1 ... [2025-12-04 10:22:23.491844][4968772.341778409] 2025-12-04T10:22:23.4921849Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:22:23.4925263Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/debug/test_op_coverage.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:22:23.492286] 2025-12-04T10:22:26.0148878Z 2025-12-04T10:22:26.0149974Z distributed/tensor/debug/test_op_coverage 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.debug.test_op_coverage_1.1_47d59ad42311bafb_.log 2025-12-04T10:22:26.0150878Z Running 1 items in this shard: test/distributed/tensor/debug/test_op_coverage.py::TestOpCoverage::test_trace_with_inductor_decomp 2025-12-04T10:22:26.0151184Z 2025-12-04T10:22:26.0152137Z Finished distributed/tensor/debug/test_op_coverage 1/1 ... [2025-12-04 10:22:26.014570][4968774.864505944], took 0.04min 2025-12-04T10:22:26.0153887Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:22:26.0165631Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:22:26.0167345Z Running distributed/tensor/parallel/test_micro_pipeline_tp 1/1 ... 
[2025-12-04 10:22:26.016588][4968774.866525829] 2025-12-04T10:22:26.0168117Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:22:26.0172014Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/parallel/test_micro_pipeline_tp.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:22:26.016773] 2025-12-04T10:22:48.8229304Z 2025-12-04T10:22:48.8231315Z distributed/tensor/parallel/test_micro_pipeline_tp 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.parallel.test_micro_pipeline_tp_1.1_c70cc1f9b3f0bf6d_.log 2025-12-04T10:22:48.8262021Z Running 44 items in this shard: test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_dtensor_seq_par_shard_dim_0, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_dtensor_seq_par_shard_dim_1, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_find_all_gather_patterns, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_find_reduce_scatter_patterns, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_matmul_A_dims_2_gather_dim_0_return_A_False, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_matmul_A_dims_2_gather_dim_0_return_A_True, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_matmul_A_dims_2_gather_dim_1_return_A_False, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_matmul_A_dims_2_gather_dim_1_return_A_True, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_matmul_A_dims_2_gather_dim_2_return_A_False, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_matmul_A_dims_2_gather_dim_2_return_A_True, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_matmul_A_dims_3_gather_dim_0_return_A_False, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_matmul_A_dims_3_gather_dim_0_return_A_True, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_matmul_A_dims_3_gather_dim_1_return_A_False, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_matmul_A_dims_3_gather_dim_1_return_A_True, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_matmul_A_dims_3_gather_dim_2_return_A_False, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_matmul_A_dims_3_gather_dim_2_return_A_True, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_scaled_matmul_A_dims_2_gather_dim_0_return_A_False, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_scaled_matmul_A_dims_2_gather_dim_0_return_A_True, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_scaled_matmul_A_dims_2_gather_dim_1_return_A_False, 
test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_scaled_matmul_A_dims_2_gather_dim_1_return_A_True, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_scaled_matmul_A_dims_2_gather_dim_2_return_A_False, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_scaled_matmul_A_dims_2_gather_dim_2_return_A_True, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_scaled_matmul_A_dims_3_gather_dim_0_return_A_False, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_scaled_matmul_A_dims_3_gather_dim_0_return_A_True, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_scaled_matmul_A_dims_3_gather_dim_1_return_A_False, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_scaled_matmul_A_dims_3_gather_dim_1_return_A_True, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_scaled_matmul_A_dims_3_gather_dim_2_return_A_False, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_scaled_matmul_A_dims_3_gather_dim_2_return_A_True, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_matmul_reduce_scatter_A_dims_2_scatter_dim_0, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_matmul_reduce_scatter_A_dims_2_scatter_dim_1, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_matmul_reduce_scatter_A_dims_2_scatter_dim_2, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_matmul_reduce_scatter_A_dims_3_scatter_dim_0, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_matmul_reduce_scatter_A_dims_3_scatter_dim_1, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_matmul_reduce_scatter_A_dims_3_scatter_dim_2, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_scaled_matmul_reduce_scatter_A_dims_2_scatter_dim_0, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_scaled_matmul_reduce_scatter_A_dims_2_scatter_dim_1, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_scaled_matmul_reduce_scatter_A_dims_2_scatter_dim_2, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_scaled_matmul_reduce_scatter_A_dims_3_scatter_dim_0, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_scaled_matmul_reduce_scatter_A_dims_3_scatter_dim_1, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_scaled_matmul_reduce_scatter_A_dims_3_scatter_dim_2, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_scaled_matmul_reduce_scatter_rowwise_scales_reshape_mm_reshape_scatter_dim_0, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_scaled_matmul_reduce_scatter_rowwise_scales_reshape_mm_reshape_scatter_dim_1, test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTPTest::test_get_unexposed_collectives, 
test/distributed/tensor/parallel/test_micro_pipeline_tp.py::MicroPipelineTP4GPUTest::test_extra_collectives 2025-12-04T10:22:48.8290500Z 2025-12-04T10:22:48.8291070Z Finished distributed/tensor/parallel/test_micro_pipeline_tp 1/1 ... [2025-12-04 10:22:48.822614][4968797.672546028], took 0.38min 2025-12-04T10:22:48.8292588Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:22:48.8293861Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:22:48.8294820Z Running distributed/_tools/test_mod_tracker 1/1 ... [2025-12-04 10:22:48.827088][4968797.677022204] 2025-12-04T10:22:48.8295485Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:22:48.8296808Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_tools/test_mod_tracker.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:22:48.827527] 2025-12-04T10:22:51.2473510Z 2025-12-04T10:22:51.2475669Z distributed/_tools/test_mod_tracker 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._tools.test_mod_tracker_1.1_b9b09806a0c4ed58_.log 2025-12-04T10:22:51.2478431Z Running 4 items in this shard: test/distributed/_tools/test_mod_tracker.py::TestModTracker::test_ac, test/distributed/_tools/test_mod_tracker.py::TestModTracker::test_bw_detection, test/distributed/_tools/test_mod_tracker.py::TestModTracker::test_module_hierarchy, test/distributed/_tools/test_mod_tracker.py::TestModTracker::test_user_hooks 2025-12-04T10:22:51.2480058Z 2025-12-04T10:22:51.2480506Z Finished distributed/_tools/test_mod_tracker 1/1 ... [2025-12-04 10:22:51.246861][4968800.096793935], took 0.04min 2025-12-04T10:22:51.2484961Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:22:51.2511026Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:22:51.2518873Z Running distributed/_shard/sharded_tensor/test_logger 1/1 ... [2025-12-04 10:22:51.251430][4968800.101363819] 2025-12-04T10:22:51.2520324Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:22:51.2521756Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_shard/sharded_tensor/test_logger.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:22:51.251883] 2025-12-04T10:22:53.6713580Z 2025-12-04T10:22:53.6714681Z distributed/_shard/sharded_tensor/test_logger 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._shard.sharded_tensor.test_logger_1.1_f547818a14a8a078_.log 2025-12-04T10:22:53.6716321Z Running 1 items in this shard: test/distributed/_shard/sharded_tensor/test_logger.py::ShardingSpecLoggerTest::test_get_or_create_logger 2025-12-04T10:22:53.6717024Z 2025-12-04T10:22:53.6717507Z Finished distributed/_shard/sharded_tensor/test_logger 1/1 ... 
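Each "Parsing testcases for test report" step reads a pytest JUnit-style XML such as the distributed.test_inductor_collectives-522d9376131b79d6.xml path above. A minimal reader for that format using only the standard library (summarize is an illustrative helper):

import xml.etree.ElementTree as ET

def summarize(report_path):
    # JUnit XML: <testsuite> containing <testcase> nodes; a nested
    # <failure>/<error>/<skipped> child marks the outcome.
    root = ET.parse(report_path).getroot()
    for case in root.iter("testcase"):
        if case.find("failure") is not None or case.find("error") is not None:
            status = "failed"
        elif case.find("skipped") is not None:
            status = "skipped"
        else:
            status = "passed"
        print(case.get("classname"), case.get("name"), case.get("time"), status)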
[2025-12-04 10:22:53.670991][4968802.520921133], took 0.04min 2025-12-04T10:22:53.6726381Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:22:53.6754818Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:22:53.6761610Z Running distributed/tensor/test_dtensor_compile 1/4 ... [2025-12-04 10:22:53.675948][4968802.525881663] 2025-12-04T10:22:53.6762303Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:22:53.6769330Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/test_dtensor_compile.py', '--shard-id=1', '--num-shards=4', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:22:53.676431] 2025-12-04T10:24:28.4617853Z 2025-12-04T10:24:28.4619426Z distributed/tensor/test_dtensor_compile 1/4 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.test_dtensor_compile_1.4_d4ff98c8adaf2800_.log 2025-12-04T10:24:28.4627292Z Running 11 items in this shard: test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dtensor_different_gradient_placement, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dtensor_requires_grad_recompile, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dynamo_dtensor_from_local, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dynamo_dtensor_from_local_redistribute_async, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dynamo_to_local_kwargs, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_graph_input_is_async, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_unwrap_async_collective_tensor_tangent, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompileE2E::test_2d_fsdp_tp_ac_compile_use_ca_False, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompileE2E::test_2d_fsdp_tp_compile_use_ca_False, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompileE2E::test_tp_compile_fullgraph_is_seq_parallel_True_use_ca_False, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompileE2E::test_tp_compile_fullgraph_is_seq_parallel_True_use_ca_True 2025-12-04T10:24:28.4633509Z 2025-12-04T10:24:28.4633980Z Finished distributed/tensor/test_dtensor_compile 1/4 ... [2025-12-04 10:24:28.461420][4968897.311351423], took 1.58min 2025-12-04T10:24:28.4635463Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:24:28.4659274Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:24:28.4665702Z Running distributed/tensor/test_dtensor_compile 4/4 ... [2025-12-04 10:24:28.466317][4968897.316250834] 2025-12-04T10:24:28.4666699Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:24:28.4670885Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/test_dtensor_compile.py', '--shard-id=4', '--num-shards=4', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:24:28.466789] 2025-12-04T10:25:18.2729977Z 2025-12-04T10:25:18.2731769Z distributed/tensor/test_dtensor_compile 4/4 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.test_dtensor_compile_4.4_9fac37c1fd1d2a4f_.log 2025-12-04T10:25:18.2739805Z Running 12 items in this shard: test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_device_mesh_compile, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dtensor_attribute_access_on_intermediate, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dtensor_constructor_w_dynamo_disable, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dtensor_constructor_w_graph_break, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dtensor_contiguous_dtensor_noncontiguous_local_as_tangent, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dtensor_dont_recompile_on_same_placement_devicemesh, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dtensor_dynamic, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dynamo_dtensor_recompile, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dynamo_from_local_grad_placements_sequence_intermediate_as_args, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompile::test_dynamo_to_local_kwargs_forward_hook, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompileE2E::test_2d_fsdp_tp_compile_use_ca_True, test/distributed/tensor/test_dtensor_compile.py::TestDTensorCompileE2E::test_compile_dtensor_redistribute_backward_use_ca_False 2025-12-04T10:25:18.2746637Z 2025-12-04T10:25:18.2747116Z Finished distributed/tensor/test_dtensor_compile 4/4 ... [2025-12-04 10:25:18.272631][4968947.122561491], took 0.83min 2025-12-04T10:25:18.2748581Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:25:18.2772697Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:25:18.2778889Z Running distributed/tensor/test_dtensor 2/3 ... [2025-12-04 10:25:18.277648][4968947.127580769] 2025-12-04T10:25:18.2779569Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:25:18.2783818Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/test_dtensor.py', '--shard-id=2', '--num-shards=3', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:25:18.278146] 2025-12-04T10:26:40.1940595Z 2025-12-04T10:26:40.1942167Z distributed/tensor/test_dtensor 2/3 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.test_dtensor_2.3_f4e8d4597e0476be_.log 2025-12-04T10:26:40.1957666Z Running 33 items in this shard: test/distributed/tensor/test_dtensor.py::DTensorTest::test_dtensor_async_output, test/distributed/tensor/test_dtensor.py::DTensorTest::test_dtensor_constructor, test/distributed/tensor/test_dtensor.py::DTensorTest::test_dtensor_properties, test/distributed/tensor/test_dtensor.py::DTensorTest::test_dtensor_spec_hash, test/distributed/tensor/test_dtensor.py::DTensorTest::test_dtensor_spec_read_only_after_set, test/distributed/tensor/test_dtensor.py::DTensorTest::test_dtensor_stride, test/distributed/tensor/test_dtensor.py::DTensorTest::test_from_local_negative_dim, test/distributed/tensor/test_dtensor.py::DTensorTest::test_from_local_uneven_sharding_raise_error, test/distributed/tensor/test_dtensor.py::DTensorTest::test_full_tensor_sync, test/distributed/tensor/test_dtensor.py::DTensorTest::test_to_local_grad_hint, test/distributed/tensor/test_dtensor.py::DTensorTestWithLocalTensor::test_dtensor_async_output, test/distributed/tensor/test_dtensor.py::DTensorTestWithLocalTensor::test_dtensor_new_empty_strided, test/distributed/tensor/test_dtensor.py::DTensorTestWithLocalTensor::test_dtensor_save_load_import, test/distributed/tensor/test_dtensor.py::DTensorTestWithLocalTensor::test_dtensor_spec_hash, test/distributed/tensor/test_dtensor.py::DTensorTestWithLocalTensor::test_dtensor_spec_read_only_after_set, test/distributed/tensor/test_dtensor.py::DTensorTestWithLocalTensor::test_from_local, test/distributed/tensor/test_dtensor.py::DTensorTestWithLocalTensor::test_shard_tensor, test/distributed/tensor/test_dtensor.py::DTensorTestWithLocalTensor::test_to_local, test/distributed/tensor/test_dtensor.py::DTensorMeshTest::test_as_strided_identity, test/distributed/tensor/test_dtensor.py::DTensorMeshTest::test_auto_implicit_replication, test/distributed/tensor/test_dtensor.py::DTensorMeshTest::test_dtensor_2d_mesh, test/distributed/tensor/test_dtensor.py::DTensorMeshTest::test_dtensor_spec_local_shard_offset, test/distributed/tensor/test_dtensor.py::DTensorMeshTest::test_vmap_embedding, test/distributed/tensor/test_dtensor.py::DTensorMeshTestWithLocalTensor::test_as_strided_identity, test/distributed/tensor/test_dtensor.py::DTensorMeshTestWithLocalTensor::test_auto_implicit_replication, test/distributed/tensor/test_dtensor.py::DTensorMeshTestWithLocalTensor::test_device_mesh_nd, test/distributed/tensor/test_dtensor.py::DTensorMeshTestWithLocalTensor::test_implicit_replication, test/distributed/tensor/test_dtensor.py::DTensorMeshTestWithLocalTensor::test_metadata_consistency_check, test/distributed/tensor/test_dtensor.py::DTensorMeshTestWithLocalTensor::test_redistribute_sub_mesh, test/distributed/tensor/test_dtensor.py::DTensorMeshTestWithLocalTensor::test_vmap_embedding, test/distributed/tensor/test_dtensor.py::TestDTensorPlacementTypes::test_split_tensor_1D, test/distributed/tensor/test_dtensor.py::TestDTensorSpecWithLocalTensor::test_default_shard_order, test/distributed/tensor/test_dtensor.py::TestDTensorSpecWithLocalTensor::test_dtensor_spec_default_shard_order_generation 2025-12-04T10:26:40.1973261Z 2025-12-04T10:26:40.1973692Z Finished distributed/tensor/test_dtensor 2/3 ... 
[2025-12-04 10:26:40.193825][4969029.043754479], took 1.37min 2025-12-04T10:26:40.1975349Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:26:40.1984890Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:26:40.1991414Z Running distributed/test_aten_comm_compute_reordering 2/3 ... [2025-12-04 10:26:40.198914][4969029.048846972] 2025-12-04T10:26:40.1992111Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:26:40.1996674Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_aten_comm_compute_reordering.py', '--shard-id=2', '--num-shards=3', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:26:40.199406] 2025-12-04T10:29:46.9561244Z 2025-12-04T10:29:46.9562710Z distributed/test_aten_comm_compute_reordering 2/3 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_aten_comm_compute_reordering_2.3_d75d76b2aee06ba6_.log 2025-12-04T10:29:46.9572435Z Running 14 items in this shard: test/distributed/test_aten_comm_compute_reordering.py::TestComputeCommReorderingMultiProc::test_overlap_scheduling_via_config, test/distributed/test_aten_comm_compute_reordering.py::TestComputeCommReorderingMultiProc::test_raise_comms, test/distributed/test_aten_comm_compute_reordering.py::TestComputeCommReorderingBucketing::test_basic_all_gather_bucketing, test/distributed/test_aten_comm_compute_reordering.py::TestComputeCommReorderingBucketing::test_bucketing_wait_sink, test/distributed/test_aten_comm_compute_reordering.py::TestComputeCommReorderingBucketing::test_bucketing_with_convert_dtype, test/distributed/test_aten_comm_compute_reordering.py::TestComputeCommReorderingBucketing::test_custom_estimation_with_fake_tensor_mode, test/distributed/test_aten_comm_compute_reordering.py::TestComputeCommReorderingBucketing::test_custom_estimator_for_non_compute_nodes, test/distributed/test_aten_comm_compute_reordering.py::TestComputeCommReorderingBucketing::test_no_bucketing_when_collective_depends_on_hiding_node, test/distributed/test_aten_comm_compute_reordering.py::TestComputeCommReorderingBucketing::test_no_bucketing_with_dependent_hiding_nodes, test/distributed/test_aten_comm_compute_reordering.py::TestComputeCommReorderingBucketing::test_reduce_scatter_bucketing, test/distributed/test_aten_comm_compute_reordering.py::TestComputeCommReorderingBucketing::test_schedulable_wait, test/distributed/test_aten_comm_compute_reordering.py::TestManualOverlapBucketing::test_bucketing_reordering_pass_single_bucket_custom_module_stack_fn, test/distributed/test_aten_comm_compute_reordering.py::TestManualOverlapBucketing::test_manual_reordering_bucketing_pass_separate_buckets, test/distributed/test_aten_comm_compute_reordering.py::TestManualOverlapBucketing::test_raise_comms 2025-12-04T10:29:46.9582274Z 2025-12-04T10:29:46.9582783Z Finished distributed/test_aten_comm_compute_reordering 2/3 ... 
[2025-12-04 10:29:46.955916][4969215.805848539], took 3.11min 2025-12-04T10:29:46.9584290Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:29:46.9601865Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:29:46.9608288Z Running distributed/tensor/test_dynamic 1/1 ... [2025-12-04 10:29:46.960578][4969215.810510942] 2025-12-04T10:29:46.9608941Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:29:46.9612707Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/test_dynamic.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:29:46.961029] 2025-12-04T10:30:24.9925405Z 2025-12-04T10:30:24.9927623Z distributed/tensor/test_dynamic 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.test_dynamic_1.1_4a1778b74a4f95cd_.log 2025-12-04T10:30:24.9930938Z Running 4 items in this shard: test/distributed/tensor/test_dynamic.py::TestDynamic::test_embedding_fake_tensor_cache_enabled_False, test/distributed/tensor/test_dynamic.py::TestDynamic::test_embedding_fake_tensor_cache_enabled_True, test/distributed/tensor/test_dynamic.py::TestDynamicWithLocalTensor::test_embedding_fake_tensor_cache_enabled_False, test/distributed/tensor/test_dynamic.py::TestDynamicWithLocalTensor::test_embedding_fake_tensor_cache_enabled_True 2025-12-04T10:30:24.9933170Z 2025-12-04T10:30:24.9933610Z Finished distributed/tensor/test_dynamic 1/1 ... [2025-12-04 10:30:24.992047][4969253.841979056], took 0.63min 2025-12-04T10:30:24.9938647Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:30:24.9967077Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:30:24.9971568Z Running distributed/checkpoint/e2e/test_fsdp_ep 1/1 ... [2025-12-04 10:30:24.996873][4969253.846806437] 2025-12-04T10:30:24.9972263Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:30:24.9976567Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/checkpoint/e2e/test_fsdp_ep.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:30:24.997400] 2025-12-04T10:30:29.6710016Z 2025-12-04T10:30:29.6711552Z distributed/checkpoint/e2e/test_fsdp_ep 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.checkpoint.e2e.test_fsdp_ep_1.1_c2a38e341e8d9097_.log 2025-12-04T10:30:29.6713069Z Running 1 items in this shard: test/distributed/checkpoint/e2e/test_fsdp_ep.py::TestFSDPWithEP::test_e2e 2025-12-04T10:30:29.6713640Z 2025-12-04T10:30:29.6714143Z Finished distributed/checkpoint/e2e/test_fsdp_ep 1/1 ... 
[2025-12-04 10:30:29.670680][4969258.520611852], took 0.08min 2025-12-04T10:30:29.6724187Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:30:29.6749802Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:30:29.6756268Z Running distributed/pipelining/test_unflatten 1/1 ... [2025-12-04 10:30:29.675420][4969258.525353405] 2025-12-04T10:30:29.6756977Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:30:29.6761089Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/pipelining/test_unflatten.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:30:29.675884] 2025-12-04T10:30:38.0067093Z 2025-12-04T10:30:38.0068487Z distributed/pipelining/test_unflatten 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.pipelining.test_unflatten_1.1_8be7b38074fc5b5f_.log 2025-12-04T10:30:38.0073076Z Running 1 items in this shard: test/distributed/pipelining/test_unflatten.py::UnflattenTestsCUDA::test_unflatten_cuda 2025-12-04T10:30:38.0073754Z 2025-12-04T10:30:38.0074235Z Finished distributed/pipelining/test_unflatten 1/1 ... [2025-12-04 10:30:38.006256][4969266.856188139], took 0.14min 2025-12-04T10:30:38.0080423Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:30:38.0108851Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:30:38.0112930Z Running distributed/tensor/test_dtensor_testbase 1/1 ... [2025-12-04 10:30:38.011025][4969266.860958572] 2025-12-04T10:30:38.0113646Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:30:38.0118636Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/test_dtensor_testbase.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:30:38.011488] 2025-12-04T10:30:44.3397248Z 2025-12-04T10:30:44.3399496Z distributed/tensor/test_dtensor_testbase 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.test_dtensor_testbase_1.1_70fcadccd2b25f52_.log 2025-12-04T10:30:44.3401324Z Running 1 items in this shard: test/distributed/tensor/test_dtensor_testbase.py::DTensorTestBaseUtilCPUTest::test_dtensor_testbase_destroy_pg 2025-12-04T10:30:44.3402057Z 2025-12-04T10:30:44.3402534Z Finished distributed/tensor/test_dtensor_testbase 1/1 ... [2025-12-04 10:30:44.339489][4969273.189416832], took 0.11min 2025-12-04T10:30:44.3413978Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:30:44.3442280Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:30:44.3449069Z Running distributed/tensor/test_redistribute 1/2 ... 
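The "Finished ... [wall time][counter], took N.NNmin" lines pair a wall-clock stamp with what looks like a monotonic counter, and the counters are consistent with the reported durations (for test_unflatten above, 4969266.856 - 4969258.525 is roughly 8.3 s, i.e. the logged 0.14 min). Assuming a monotonic clock is indeed the source, the arithmetic reduces to:

import time
from datetime import datetime

start = time.monotonic()
time.sleep(0.1)  # stand-in for the test run
end = time.monotonic()
# Mirrors the log's "[wall time][counter], took N.NNmin" tail.
print(f"[{datetime.now():%Y-%m-%d %H:%M:%S.%f}][{end:.9f}], took {(end - start) / 60:.2f}min")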
[2025-12-04 10:30:44.344625][4969273.194558179] 2025-12-04T10:30:44.3449817Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:30:44.3453509Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/test_redistribute.py', '--shard-id=1', '--num-shards=2', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:30:44.345113] 2025-12-04T10:32:15.7335706Z 2025-12-04T10:32:15.7337093Z distributed/tensor/test_redistribute 1/2 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.test_redistribute_1.2_36f32c8639f1473d_.log 2025-12-04T10:32:15.7353074Z Running 25 items in this shard: test/distributed/tensor/test_redistribute.py::RedistributeTest::test_partial_to_replicate_forward_backward_complex64, test/distributed/tensor/test_redistribute.py::RedistributeTest::test_partial_to_shard_float32, test/distributed/tensor/test_redistribute.py::RedistributeTest::test_redistribute_negative_shard_dim, test/distributed/tensor/test_redistribute.py::RedistributeTest::test_redistribute_shard_dim_change_complex64, test/distributed/tensor/test_redistribute.py::RedistributeTest::test_redistribute_shard_dim_change_float32, test/distributed/tensor/test_redistribute.py::RedistributeTest::test_redistribute_to_partial, test/distributed/tensor/test_redistribute.py::RedistributeTest::test_redistribute_uneven_sharding, test/distributed/tensor/test_redistribute.py::RedistributeTest::test_replicate_to_partial, test/distributed/tensor/test_redistribute.py::RedistributeTest::test_replicate_to_replicate_forward_backward, test/distributed/tensor/test_redistribute.py::RedistributeTest::test_shard_to_replicate_forward_backward_datatype_conversion, test/distributed/tensor/test_redistribute.py::RedistributeTest::test_shard_to_replicate_forward_backward_float32, test/distributed/tensor/test_redistribute.py::MultiDimRedistributeTest::test_multi_dim_mesh, test/distributed/tensor/test_redistribute.py::DistributeWithDeviceOrderTest::test_ordered_redistribute, test/distributed/tensor/test_redistribute.py::DistributeWithDeviceOrderTest::test_ordered_redistribute_for_special_placement, test/distributed/tensor/test_redistribute.py::RedistributeTestWithLocalTensor::test_partial_to_shard_float32, test/distributed/tensor/test_redistribute.py::RedistributeTestWithLocalTensor::test_redistribute_shard_dim_change_complex64, test/distributed/tensor/test_redistribute.py::RedistributeTestWithLocalTensor::test_redistribute_shard_dim_change_float32, test/distributed/tensor/test_redistribute.py::RedistributeTestWithLocalTensor::test_replicate_to_local_partial_grad_complex64, test/distributed/tensor/test_redistribute.py::RedistributeTestWithLocalTensor::test_replicate_to_local_partial_grad_float32, test/distributed/tensor/test_redistribute.py::RedistributeTestWithLocalTensor::test_replicate_to_shard_forward_backward, test/distributed/tensor/test_redistribute.py::RedistributeTestWithLocalTensor::test_shard_dim_alltoall_complex64, test/distributed/tensor/test_redistribute.py::RedistributeTestWithLocalTensor::test_shard_to_replicate_forward_backward_complex64, test/distributed/tensor/test_redistribute.py::MultiDimRedistributeTestWithLocalTensor::test_redistribute_shard_dim_multi_dim_mesh, test/distributed/tensor/test_redistribute.py::DistributeWithDeviceOrderTestWithLocalTensor::test_ordered_distribute_all_combination, 
test/distributed/tensor/test_redistribute.py::DistributeWithDeviceOrderTestWithLocalTensor::test_shard_order_same_data_as_strided_shard 2025-12-04T10:32:15.7367215Z 2025-12-04T10:32:15.7367677Z Finished distributed/tensor/test_redistribute 1/2 ... [2025-12-04 10:32:15.733353][4969364.583283435], took 1.52min 2025-12-04T10:32:15.7369191Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:32:15.7382799Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:32:15.7388877Z Running distributed/tensor/test_tensor_ops 2/4 ... [2025-12-04 10:32:15.738612][4969364.588545154] 2025-12-04T10:32:15.7389540Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:32:15.7394200Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/test_tensor_ops.py', '--shard-id=2', '--num-shards=4', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:32:15.739113] 2025-12-04T10:33:09.3056064Z 2025-12-04T10:33:09.3057698Z distributed/tensor/test_tensor_ops 2/4 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.test_tensor_ops_2.4_b9323146f88faefb_.log 2025-12-04T10:33:09.3066436Z Running 18 items in this shard: test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTest::test_empty_like, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTest::test_fill_inplace, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTest::test_gather, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTest::test_index, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTest::test_scatter, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTest::test_split_on_partial, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTest::test_stack, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTest::test_zero_inplace, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTestWithLocalTensor::test_copy_, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTestWithLocalTensor::test_dtensor_dtype_conversion, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTestWithLocalTensor::test_ones_like, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTestWithLocalTensor::test_ones_like_partial_sum, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTestWithLocalTensor::test_op_out_variant, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTestWithLocalTensor::test_slice, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTestWithLocalTensor::test_split_on_partial, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTestWithLocalTensor::test_stack, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTestWithLocalTensor::test_unbind, test/distributed/tensor/test_tensor_ops.py::DistTensorOpsTestWithLocalTensor::test_zeros_like 2025-12-04T10:33:09.3074360Z 2025-12-04T10:33:09.3074802Z Finished distributed/tensor/test_tensor_ops 2/4 ... 
[2025-12-04 10:33:09.305270][4969418.155200733], took 0.89min 2025-12-04T10:33:09.3077151Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:33:09.3102458Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:33:09.3109541Z Running distributed/test_nvshmem 1/1 ... [2025-12-04 10:33:09.310622][4969418.160555433] 2025-12-04T10:33:09.3110193Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:33:09.3114269Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_nvshmem.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:33:09.311106] 2025-12-04T10:33:11.4806125Z 2025-12-04T10:33:11.4807375Z distributed/test_nvshmem 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_nvshmem_1.1_17bc04f674629a68_.log 2025-12-04T10:33:11.4833848Z Running 47 items in this shard: test/distributed/test_nvshmem.py::NVSHMEMSymmetricMemoryTest::test_alloc, test/distributed/test_nvshmem.py::NVSHMEMSymmetricMemoryTest::test_alloc_without_device_context, test/distributed/test_nvshmem.py::NVSHMEMSymmetricMemoryTest::test_get_remote_tensor, test/distributed/test_nvshmem.py::NVSHMEMSymmetricMemoryTest::test_get_remote_tensors, test/distributed/test_nvshmem.py::NVSHMEMSymmetricMemoryTest::test_handle_offset, test/distributed/test_nvshmem.py::NVSHMEMSymmetricMemoryTest::test_mempool_compute_ops, test/distributed/test_nvshmem.py::NVSHMEMSymmetricMemoryTest::test_mempool_tensor_factory, test/distributed/test_nvshmem.py::NVSHMEMSymmetricMemoryTest::test_mempool_tensor_w_collective, test/distributed/test_nvshmem.py::NVSHMEMSymmetricMemoryTest::test_nvshmem_get, test/distributed/test_nvshmem.py::NVSHMEMSymmetricMemoryTest::test_nvshmem_put, test/distributed/test_nvshmem.py::NVSHMEMAll2AllTest::test_all_to_all_vdev, test/distributed/test_nvshmem.py::NVSHMEMAll2AllTest::test_all_to_all_vdev_2d_align_1, test/distributed/test_nvshmem.py::NVSHMEMAll2AllTest::test_all_to_all_vdev_2d_align_16, test/distributed/test_nvshmem.py::NVSHMEMAll2AllTest::test_all_to_all_vdev_2d_align_8, test/distributed/test_nvshmem.py::NVSHMEMAll2AllTest::test_all_to_all_vdev_2d_offset, test/distributed/test_nvshmem.py::NVSHMEMAll2AllTest::test_nvshmem_all_to_all, test/distributed/test_nvshmem.py::DispatchCombineTest::test_dispatch_combine_align_1, test/distributed/test_nvshmem.py::DispatchCombineTest::test_dispatch_combine_align_16, test/distributed/test_nvshmem.py::DispatchCombineTest::test_dispatch_combine_align_8, test/distributed/test_nvshmem.py::DispatchCombineInSubgroups::test_dispatch_combine_subgroup, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_128_root_ratio_1_bfloat16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_128_root_ratio_1_float16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_128_root_ratio_1_float32, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_128_root_ratio_2_bfloat16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_128_root_ratio_2_float16, 
test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_128_root_ratio_2_float32, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_32_root_ratio_1_bfloat16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_32_root_ratio_1_float16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_32_root_ratio_1_float32, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_32_root_ratio_2_bfloat16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_32_root_ratio_2_float16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_32_root_ratio_2_float32, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_512_root_ratio_1_bfloat16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_512_root_ratio_1_float16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_512_root_ratio_1_float32, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_512_root_ratio_2_bfloat16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_512_root_ratio_2_float16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_multi_root_tile_reduce_tile_size_512_root_ratio_2_float32, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_tile_reduce_tile_size_128_bfloat16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_tile_reduce_tile_size_128_float16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_tile_reduce_tile_size_128_float32, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_tile_reduce_tile_size_32_bfloat16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_tile_reduce_tile_size_32_float16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_tile_reduce_tile_size_32_float32, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_tile_reduce_tile_size_512_bfloat16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_tile_reduce_tile_size_512_float16, test/distributed/test_nvshmem.py::NVSHMEMTileCommTest::test_tile_reduce_tile_size_512_float32 2025-12-04T10:33:11.4855243Z 2025-12-04T10:33:11.4855644Z Finished distributed/test_nvshmem 1/1 ... [2025-12-04 10:33:11.480195][4969420.330126137], took 0.04min 2025-12-04T10:33:11.4857063Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:33:11.4858346Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:33:11.4859145Z Running distributed/tensor/test_attention 1/1 ... [2025-12-04 10:33:11.485533][4969420.335466177] 2025-12-04T10:33:11.4859792Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:33:11.4863863Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/test_attention.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
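The NVSHMEM tile test names enumerate a full grid: tile_size in {32, 128, 512} x root_ratio in {1, 2} x dtype in {bfloat16, float16, float32}, i.e. 18 multi-root cases plus 9 single-root ones. PyTorch's test suite has its own parametrization helpers, so the generic pytest version below only illustrates how such suffixed names arise from a parameter grid:

import pytest

@pytest.mark.parametrize("dtype", ["bfloat16", "float16", "float32"])
@pytest.mark.parametrize("root_ratio", [1, 2])
@pytest.mark.parametrize("tile_size", [32, 128, 512])
def test_multi_root_tile_reduce(tile_size, root_ratio, dtype):
    # 3 x 2 x 3 = 18 generated cases, one per name suffix in the log.
    assert tile_size in (32, 128, 512)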
[2025-12-04 10:33:11.486018] 2025-12-04T10:35:25.1498433Z 2025-12-04T10:35:25.1499989Z distributed/tensor/test_attention 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.test_attention_1.1_476d43cc1d3efa81_.log 2025-12-04T10:35:25.1508602Z Running 14 items in this shard: test/distributed/tensor/test_attention.py::RingAttentionTest::test_is_causal_behavior, test/distributed/tensor/test_attention.py::RingAttentionTest::test_ring_attention_sdpa, test/distributed/tensor/test_attention.py::CPFlexAttentionTest::test_cp_flex_attention_causal_mask, test/distributed/tensor/test_attention.py::CPFlexAttentionTest::test_cp_flex_attention_document_mask, test/distributed/tensor/test_attention.py::TestCPCustomOps::test_flex_cp_custom_op, test/distributed/tensor/test_attention.py::TestSharding::test_attention_shard_without_cp, test/distributed/tensor/test_attention.py::TestSharding::test_context_parallel_shard, test/distributed/tensor/test_attention.py::RingAttentionTestWithLocalTensor::test_is_causal_behavior, test/distributed/tensor/test_attention.py::RingAttentionTestWithLocalTensor::test_ring_attention_sdpa, test/distributed/tensor/test_attention.py::CPFlexAttentionTestWithLocalTensor::test_cp_flex_attention_causal_mask, test/distributed/tensor/test_attention.py::CPFlexAttentionTestWithLocalTensor::test_cp_flex_attention_document_mask, test/distributed/tensor/test_attention.py::TestCPCustomOpsWithLocalTensor::test_flex_cp_custom_op, test/distributed/tensor/test_attention.py::TestShardingWithLocalTensor::test_attention_shard_without_cp, test/distributed/tensor/test_attention.py::TestShardingWithLocalTensor::test_context_parallel_shard 2025-12-04T10:35:25.1515556Z 2025-12-04T10:35:25.1515983Z Finished distributed/tensor/test_attention 1/1 ... [2025-12-04 10:35:25.149460][4969553.999392204], took 2.23min 2025-12-04T10:35:25.1517439Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:35:25.1541819Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:35:25.1546266Z Running distributed/test_device_mesh 2/2 ... [2025-12-04 10:35:25.154310][4969554.004243626] 2025-12-04T10:35:25.1546939Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:35:25.1550234Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_device_mesh.py', '--shard-id=2', '--num-shards=2', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:35:25.154761] 2025-12-04T10:36:55.1852973Z 2025-12-04T10:36:55.1854507Z distributed/test_device_mesh 2/2 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_device_mesh_2.2_2aceaf20713d5efb_.log 2025-12-04T10:36:55.1869711Z Running 32 items in this shard: test/distributed/test_device_mesh.py::DeviceMeshTestGlooBackend::test_device_mesh_reuse_default_group, test/distributed/test_device_mesh.py::DeviceMeshSetDeviceTest::test_auto_set_device_from_local_rank, test/distributed/test_device_mesh.py::DeviceMeshTest::test_from_group_with_global_pg, test/distributed/test_device_mesh.py::DeviceMeshTest::test_raises_invalid_device_type, test/distributed/test_device_mesh.py::DeviceMeshTestNDim::test_device_mesh_hash, test/distributed/test_device_mesh.py::DeviceMeshTestNDim::test_device_mesh_nd, test/distributed/test_device_mesh.py::DeviceMeshTestNDim::test_device_mesh_parent_child_hash, test/distributed/test_device_mesh.py::DeviceMeshTestNDim::test_from_group_with_mesh_shape_2d, test/distributed/test_device_mesh.py::DeviceMeshTestNDim::test_from_group_with_mesh_shape_3d, test/distributed/test_device_mesh.py::DeviceMeshTestNDim::test_get_local_rank_3d, test/distributed/test_device_mesh.py::InitDeviceMeshTest::test_backend_override_argument_dict_with_idx_and_backend_lazy, test/distributed/test_device_mesh.py::InitDeviceMeshTest::test_backend_override_argument_dict_with_name_and_options, test/distributed/test_device_mesh.py::InitDeviceMeshTest::test_backend_override_argument_errors, test/distributed/test_device_mesh.py::InitDeviceMeshTest::test_init_device_mesh, test/distributed/test_device_mesh.py::InitDeviceMeshTest::test_raises_duplicate_mesh_dim_names, test/distributed/test_device_mesh.py::InitDeviceMeshTest::test_raises_mesh_shape_mesh_dim_names_mismatch, test/distributed/test_device_mesh.py::TestDeviceMeshGetItem::test_cache_and_reuse_submesh_slice_result, test/distributed/test_device_mesh.py::TestDeviceMeshGetItem::test_concatenate_2d, test/distributed/test_device_mesh.py::TestDeviceMeshGetItem::test_flatten_mesh_3d, test/distributed/test_device_mesh.py::TestDeviceMeshGetItem::test_get_item_2d, test/distributed/test_device_mesh.py::TestDeviceMeshGetItem::test_get_item_3d, test/distributed/test_device_mesh.py::TestDeviceMeshGetItem::test_raises_invalid_mesh_dim_name, test/distributed/test_device_mesh.py::TestDeviceMeshGetItem::test_raises_no_mesh_dim_found, test/distributed/test_device_mesh.py::TestMeshEnv::test_get_all_submeshes, test/distributed/test_device_mesh.py::TestMeshEnv::test_get_root_mesh_dim_exist, test/distributed/test_device_mesh.py::TestMeshEnv::test_get_root_mesh_dim_not_exist, test/distributed/test_device_mesh.py::DeviceMeshCollectiveTest::test_broadcast_nd, test/distributed/test_device_mesh.py::DeviceMeshCollectiveTest::test_reduce_scatter_contiguous, test/distributed/test_device_mesh.py::DeviceMeshCollectiveTest::test_reduce_scatter_uneven, test/distributed/test_device_mesh.py::DeviceMeshCollectiveTest::test_scatter_1d, test/distributed/test_device_mesh.py::CuTeLayoutTest::test_check_non_overlap, test/distributed/test_device_mesh.py::CuTeLayoutTest::test_composition 2025-12-04T10:36:55.1883222Z 2025-12-04T10:36:55.1883578Z Finished distributed/test_device_mesh 2/2 ... 
[2025-12-04 10:36:55.185572][4969644.035503824], took 1.50min 2025-12-04T10:36:55.1885004Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:36:55.1901406Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:36:55.1910392Z Running distributed/tensor/test_dtensor_ops 1/1 ... [2025-12-04 10:36:55.190609][4969644.040542006] 2025-12-04T10:36:55.1911139Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:36:55.1914059Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/test_dtensor_ops.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:36:55.191079] 2025-12-04T10:36:58.1565055Z 2025-12-04T10:36:58.1566569Z distributed/tensor/test_dtensor_ops 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.test_dtensor_ops_1.1_c6b83e8d4d86772b_.log 2025-12-04T10:36:58.1567691Z Running 0 items in this shard: 2025-12-04T10:36:58.1567944Z 2025-12-04T10:36:58.1568394Z Finished distributed/tensor/test_dtensor_ops 1/1 ... [2025-12-04 10:36:58.156187][4969647.006119118], took 0.05min 2025-12-04T10:36:58.1581707Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:36:58.1606278Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:36:58.1612823Z Running distributed/checkpoint/test_fsspec 1/1 ... [2025-12-04 10:36:58.161121][4969647.011052921] 2025-12-04T10:36:58.1613495Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:36:58.1618080Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/checkpoint/test_fsspec.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:36:58.161594] 2025-12-04T10:37:17.6626528Z 2025-12-04T10:37:17.6628052Z distributed/checkpoint/test_fsspec 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.checkpoint.test_fsspec_1.1_723240ac8f9bd957_.log 2025-12-04T10:37:17.6630253Z Running 3 items in this shard: test/distributed/checkpoint/test_fsspec.py::TestFSSpec::test_fsspec, test/distributed/checkpoint/test_fsspec.py::TestFSSpec::test_overwrite, test/distributed/checkpoint/test_fsspec.py::TestFileSystem::test_remove_on_fail 2025-12-04T10:37:17.6631692Z 2025-12-04T10:37:17.6632163Z Finished distributed/checkpoint/test_fsspec 1/1 ... [2025-12-04 10:37:17.662416][4969666.512346598], took 0.33min 2025-12-04T10:37:17.6644970Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:37:17.6672003Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:37:17.6678996Z Running distributed/tensor/experimental/test_tp_transform 1/1 ... 
[2025-12-04 10:37:17.667714][4969666.517647995] 2025-12-04T10:37:17.6679751Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:37:17.6684311Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/experimental/test_tp_transform.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:37:17.668185] 2025-12-04T10:37:47.3381116Z 2025-12-04T10:37:47.3383702Z distributed/tensor/experimental/test_tp_transform 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.experimental.test_tp_transform_1.1_df3716f4385256b1_.log 2025-12-04T10:37:47.3386474Z Running 3 items in this shard: test/distributed/tensor/experimental/test_tp_transform.py::TensorParallelTest::test_tp_transform_e2e, test/distributed/tensor/experimental/test_tp_transform.py::TensorParallelTest::test_tp_transform_no_bias, test/distributed/tensor/experimental/test_tp_transform.py::TensorParallelTest::test_tp_transform_with_uncovered_op 2025-12-04T10:37:47.3388409Z 2025-12-04T10:37:47.3388952Z Finished distributed/tensor/experimental/test_tp_transform 1/1 ... [2025-12-04 10:37:47.337646][4969696.187577192], took 0.49min 2025-12-04T10:37:47.3398870Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:37:47.3429322Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:37:47.3434254Z Running distributed/_composable/test_checkpoint 1/1 ... [2025-12-04 10:37:47.343165][4969696.193097097] 2025-12-04T10:37:47.3434987Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:37:47.3438786Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_composable/test_checkpoint.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:37:47.343659] 2025-12-04T10:37:55.6362940Z 2025-12-04T10:37:55.6364523Z distributed/_composable/test_checkpoint 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._composable.test_checkpoint_1.1_3eb547aa494750e9_.log 2025-12-04T10:37:55.6368093Z Running 6 items in this shard: test/distributed/_composable/test_checkpoint.py::TestCheckpoint::test_checkpoint_kwargs, test/distributed/_composable/test_checkpoint.py::TestCheckpoint::test_clears_state_on_error_in_forward, test/distributed/_composable/test_checkpoint.py::TestCheckpoint::test_multi_args, test/distributed/_composable/test_checkpoint.py::TestCheckpoint::test_random_cpu, test/distributed/_composable/test_checkpoint.py::TestCheckpoint::test_tensor_only_cpu, test/distributed/_composable/test_checkpoint.py::TestCheckpoint::test_tensor_only_gpu 2025-12-04T10:37:55.6370792Z 2025-12-04T10:37:55.6371260Z Finished distributed/_composable/test_checkpoint 1/1 ... 
[2025-12-04 10:37:55.635782][4969704.485718847], took 0.14min 2025-12-04T10:37:55.6372759Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:37:55.6379949Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:37:55.6380834Z Running distributed/_tools/test_fsdp2_mem_tracker 1/1 ... [2025-12-04 10:37:55.637847][4969704.487784526] 2025-12-04T10:37:55.6381518Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:37:55.6382878Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_tools/test_fsdp2_mem_tracker.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:37:55.638047] 2025-12-04T10:38:43.6272299Z 2025-12-04T10:38:43.6273913Z distributed/_tools/test_fsdp2_mem_tracker 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._tools.test_fsdp2_mem_tracker_1.1_bde84747f0b6c8bb_.log 2025-12-04T10:38:43.6277840Z Running 3 items in this shard: test/distributed/_tools/test_fsdp2_mem_tracker.py::TestTrackerFullyShard1DTrainingCore::test_tracker_multi_group_eager, test/distributed/_tools/test_fsdp2_mem_tracker.py::TestTrackerFullyShard1DTrainingCore::test_tracker_non_root_forward_backward, test/distributed/_tools/test_fsdp2_mem_tracker.py::TestTrackerFullyShard1DTrainingCompose::test_tracker_with_activation_checkpointing 2025-12-04T10:38:43.6279875Z 2025-12-04T10:38:43.6280344Z Finished distributed/_tools/test_fsdp2_mem_tracker 1/1 ... [2025-12-04 10:38:43.626842][4969752.476774568], took 0.80min 2025-12-04T10:38:43.6292653Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:38:43.6317442Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:38:43.6323167Z Running distributed/tensor/test_embedding_ops 1/1 ... [2025-12-04 10:38:43.632080][4969752.482013499] 2025-12-04T10:38:43.6323863Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:38:43.6327686Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/test_embedding_ops.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:38:43.632539] 2025-12-04T10:39:14.0611309Z 2025-12-04T10:39:14.0612182Z distributed/tensor/test_embedding_ops 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.test_embedding_ops_1.1_949ce47316a7b802_.log 2025-12-04T10:39:14.0616569Z Running 8 items in this shard: test/distributed/tensor/test_embedding_ops.py::TestEmbeddingOp::test_multiple_embeddings_rowwise, test/distributed/tensor/test_embedding_ops.py::TestEmbeddingOp::test_sharded_embedding_colwise, test/distributed/tensor/test_embedding_ops.py::TestEmbeddingOp::test_sharded_embedding_colwise_max_norm_errors, test/distributed/tensor/test_embedding_ops.py::TestEmbeddingOp::test_sharded_embedding_rowwise, test/distributed/tensor/test_embedding_ops.py::TestEmbeddingOpWithLocalTensor::test_multiple_embeddings_rowwise, test/distributed/tensor/test_embedding_ops.py::TestEmbeddingOpWithLocalTensor::test_sharded_embedding_colwise, test/distributed/tensor/test_embedding_ops.py::TestEmbeddingOpWithLocalTensor::test_sharded_embedding_colwise_max_norm_errors, test/distributed/tensor/test_embedding_ops.py::TestEmbeddingOpWithLocalTensor::test_sharded_embedding_rowwise 2025-12-04T10:39:14.0621496Z 2025-12-04T10:39:14.0621972Z Finished distributed/tensor/test_embedding_ops 1/1 ... [2025-12-04 10:39:14.060775][4969782.910705253], took 0.51min 2025-12-04T10:39:14.0631644Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:39:14.0658768Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:39:14.0665595Z Running distributed/checkpoint/test_fsdp_optim_state 1/1 ... [2025-12-04 10:39:14.066306][4969782.91623959] 2025-12-04T10:39:14.0666360Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:39:14.0670170Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/checkpoint/test_fsdp_optim_state.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:39:14.066782] 2025-12-04T10:39:38.7710869Z 2025-12-04T10:39:38.7712556Z distributed/checkpoint/test_fsdp_optim_state 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.checkpoint.test_fsdp_optim_state_1.1_0d7fcaac69422ad7_.log 2025-12-04T10:39:38.7715171Z Running 2 items in this shard: test/distributed/checkpoint/test_fsdp_optim_state.py::FsdpOptimStateCheckpoint::test_load_sharded_optimizer_state_dict_pass_planner_False, test/distributed/checkpoint/test_fsdp_optim_state.py::FsdpOptimStateCheckpoint::test_load_sharded_optimizer_state_dict_pass_planner_True 2025-12-04T10:39:38.7716758Z 2025-12-04T10:39:38.7718061Z Finished distributed/checkpoint/test_fsdp_optim_state 1/1 ... [2025-12-04 10:39:38.770982][4969807.620910466], took 0.41min 2025-12-04T10:39:38.7734861Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:39:38.7761509Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:39:38.7769184Z Running distributed/checkpoint/e2e/test_e2e_save_and_load 1/1 ... 
[2025-12-04 10:39:38.776541][4969807.626472313] 2025-12-04T10:39:38.7769948Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:39:38.7773065Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/checkpoint/e2e/test_e2e_save_and_load.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:39:38.777019] 2025-12-04T10:43:04.3764036Z 2025-12-04T10:43:04.3765805Z distributed/checkpoint/e2e/test_e2e_save_and_load 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.checkpoint.e2e.test_e2e_save_and_load_1.1_0aae14d3ec97121c_.log 2025-12-04T10:43:04.3777795Z Running 19 items in this shard: test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_different_ordered_state_dict_keys, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_e2e_async_cached_cache_staged_state_dict_False_async_checkpointer_type0_zoc_False, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_e2e_async_cached_cache_staged_state_dict_False_async_checkpointer_type2_zoc_False, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_e2e_async_cached_cache_staged_state_dict_False_async_checkpointer_type4_zoc_True, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_e2e_async_cached_cache_staged_state_dict_False_async_checkpointer_type5_zoc_True, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_e2e_async_cached_cache_staged_state_dict_True_async_checkpointer_type1_zoc_False, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_e2e_async_cached_cache_staged_state_dict_True_async_checkpointer_type3_zoc_False, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_e2e_compile_False_model_type0, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_e2e_compile_False_model_type1, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_e2e_compile_False_model_type2, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_e2e_compile_True_model_type0, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_e2e_compile_True_model_type1, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_e2e_compile_True_model_type2, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_no_dist, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_overwrite, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_partial_load, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestE2ESaveAndLoad::test_stateful_and_non_stateful_loads, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestNoCPU::test_no_cpu, test/distributed/checkpoint/e2e/test_e2e_save_and_load.py::TestInitStateDict::test_init_state_dict 2025-12-04T10:43:04.3789662Z 2025-12-04T10:43:04.3790174Z Finished distributed/checkpoint/e2e/test_e2e_save_and_load 1/1 ... 
[2025-12-04 10:43:04.376347][4970013.226279295], took 3.43min 2025-12-04T10:43:04.3791760Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:43:04.3812111Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:43:04.3812896Z GITHUB_RUN_ID, GITHUB_RUN_ATTEMPT, or ARTIFACTS_FILE_SUFFIX not set, not uploading 2025-12-04T10:43:04.3813495Z Uploading artifacts took 0.00 seconds 2025-12-04T10:43:04.3817671Z Running distributed/checkpoint/test_dtensor_resharding 1/1 ... [2025-12-04 10:43:04.381546][4970013.231480541] 2025-12-04T10:43:04.3818410Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:43:04.3822274Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/checkpoint/test_dtensor_resharding.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:43:04.381985] 2025-12-04T10:44:39.7785107Z 2025-12-04T10:44:39.7786801Z distributed/checkpoint/test_dtensor_resharding 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.checkpoint.test_dtensor_resharding_1.1_0b616e0bd1f6acbf_.log 2025-12-04T10:44:39.7794572Z Running 10 items in this shard: test/distributed/checkpoint/test_dtensor_resharding.py::TestDTensorReshardPlacementChange::test_1d_to_1d_reshard_placement_change_extensions0, test/distributed/checkpoint/test_dtensor_resharding.py::TestDTensorReshardPlacementChange::test_1d_to_1d_reshard_placement_change_extensions1, test/distributed/checkpoint/test_dtensor_resharding.py::TestDTensorReshardPlacementChange::test_1d_to_1d_reshard_placement_change_extensions2, test/distributed/checkpoint/test_dtensor_resharding.py::TestDTensorReshardPlacementChange::test_2d_to_2d_reshard_placement_change, test/distributed/checkpoint/test_dtensor_resharding.py::TestDTensorReshardMeshChange::test_1d_to_2d_reshard_mesh_change, test/distributed/checkpoint/test_dtensor_resharding.py::TestDTensorReshardMeshChange::test_2d_to_1d_reshard_mesh_change, test/distributed/checkpoint/test_dtensor_resharding.py::TestDTensorReshardMeshChange::test_dtensor_checkpoint_resharding_with_empty_shard, test/distributed/checkpoint/test_dtensor_resharding.py::TestDTensorReshardMeshChange::test_dtensor_checkpoint_with_uneven_shards, test/distributed/checkpoint/test_dtensor_resharding.py::TestCheckpointableReshard::test_uneven_reshard_with_checkpointable_api, test/distributed/checkpoint/test_dtensor_resharding.py::TestCheckpointableReshard::test_uneven_reshard_with_dtensor_shards_wrapper_api 2025-12-04T10:44:39.7802246Z 2025-12-04T10:44:39.7802776Z Finished distributed/checkpoint/test_dtensor_resharding 1/1 ... [2025-12-04 10:44:39.778269][4970108.628198872], took 1.59min 2025-12-04T10:44:39.7809338Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:44:39.7835657Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:44:39.7843013Z Running distributed/_composable/test_replicate_with_compiler 1/1 ... 
[2025-12-04 10:44:39.784033][4970108.63396553] 2025-12-04T10:44:39.7843777Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:44:39.7848685Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_composable/test_replicate_with_compiler.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:44:39.784540] 2025-12-04T10:46:53.7353819Z 2025-12-04T10:46:53.7355313Z distributed/_composable/test_replicate_with_compiler 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._composable.test_replicate_with_compiler_1.1_6d90d633fee36ada_.log 2025-12-04T10:46:53.7361847Z Running 10 items in this shard: test/distributed/_composable/test_replicate_with_compiler.py::ReplicateTest::test_bucketing_coalesced_op, test/distributed/_composable/test_replicate_with_compiler.py::ReplicateTest::test_bucketing_concat_op, test/distributed/_composable/test_replicate_with_compiler.py::ReplicateTest::test_compile_backward_only, test/distributed/_composable/test_replicate_with_compiler.py::ReplicateTest::test_compile_bf16, test/distributed/_composable/test_replicate_with_compiler.py::ReplicateTest::test_compile_cpu, test/distributed/_composable/test_replicate_with_compiler.py::ReplicateTest::test_compile_cpu_no_sync, test/distributed/_composable/test_replicate_with_compiler.py::ReplicateTest::test_compile_fp16, test/distributed/_composable/test_replicate_with_compiler.py::ReplicateTest::test_compile_gpu, test/distributed/_composable/test_replicate_with_compiler.py::ReplicateTest::test_compile_gpu_ac, test/distributed/_composable/test_replicate_with_compiler.py::DDP_TP_Test::test_ddp_tp 2025-12-04T10:46:53.7366427Z 2025-12-04T10:46:53.7366958Z Finished distributed/_composable/test_replicate_with_compiler 1/1 ... [2025-12-04 10:46:53.734963][4970242.584894021], took 2.23min 2025-12-04T10:46:53.7374686Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:46:53.7402702Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:46:53.7409312Z Running distributed/_composable/fsdp/test_fully_shard_autograd 1/1 ... [2025-12-04 10:46:53.740634][4970242.590567042] 2025-12-04T10:46:53.7410093Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:46:53.7413781Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_composable/fsdp/test_fully_shard_autograd.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:46:53.741118] 2025-12-04T10:47:48.7628129Z 2025-12-04T10:47:48.7629753Z distributed/_composable/fsdp/test_fully_shard_autograd 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._composable.fsdp.test_fully_shard_autograd_1.1_4976d1b00243e16a_.log 2025-12-04T10:47:48.7634047Z Running 5 items in this shard: test/distributed/_composable/fsdp/test_fully_shard_autograd.py::TestFullyShardAutograd::test_nontensor_activations, test/distributed/_composable/fsdp/test_fully_shard_autograd.py::TestFullyShardAutograd::test_unused_forward_module, test/distributed/_composable/fsdp/test_fully_shard_autograd.py::TestFullyShardAutograd::test_unused_forward_output, test/distributed/_composable/fsdp/test_fully_shard_autograd.py::TestFullyShardPostAccGradHookMultiThread::test_post_acc_grad_hook_runs, test/distributed/_composable/fsdp/test_fully_shard_autograd.py::TestFullyShardPostAccGradHookMultiProcess::test_post_acc_grad_hook_optim_parity 2025-12-04T10:47:48.7637219Z 2025-12-04T10:47:48.7637769Z Finished distributed/_composable/fsdp/test_fully_shard_autograd 1/1 ... [2025-12-04 10:47:48.762467][4970297.612396979], took 0.92min 2025-12-04T10:47:48.7651906Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:47:48.7678444Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:47:48.7685772Z Running distributed/_composable/fsdp/test_fully_shard_compile 1/1 ... [2025-12-04 10:47:48.768214][4970297.618147179] 2025-12-04T10:47:48.7686551Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:47:48.7689815Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_composable/fsdp/test_fully_shard_compile.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:47:48.768700] 2025-12-04T10:52:58.7807441Z 2025-12-04T10:52:58.7808965Z distributed/_composable/fsdp/test_fully_shard_compile 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._composable.fsdp.test_fully_shard_compile_1.1_129fcf86b522fc54_.log 2025-12-04T10:52:58.7822487Z Running 18 items in this shard: test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompileCompute::test_disable_compiling_hooks, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_compiled_autograd_ctx, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_dynamo_recompiles_on_fsdp_layers, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_dynamo_trace_use_training_state, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_nested_fully_shard_backend_aot_eager, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_nested_fully_shard_backend_aot_eager_decomp_partition, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_nested_fully_shard_backend_inductor_fullgraph_False, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_nested_fully_shard_backend_inductor_fullgraph_True, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_nested_fully_shard_backend_inductor_fullgraph_True_graph_partition, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_simple_mlp_fullgraph_backend_aot_eager, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_simple_mlp_fullgraph_backend_aot_eager_decomp_partition, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_simple_mlp_fullgraph_backend_inductor, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_trace_fsdp_copy_, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_transformer_backend_aot_eager, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_transformer_backend_aot_eager_decomp_partition, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_transformer_backend_inductor_fullgraph_False, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_transformer_backend_inductor_fullgraph_True, test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_transformer_backend_inductor_fullgraph_True_graph_partition 2025-12-04T10:52:58.7834220Z 2025-12-04T10:52:58.7834751Z Finished distributed/_composable/fsdp/test_fully_shard_compile 1/1 ... [2025-12-04 10:52:58.780749][4970607.630680251], took 5.17min 2025-12-04T10:52:58.7836315Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:52:58.7857987Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:52:58.7864543Z Running distributed/_pycute/test_coalesce 1/1 ... 
[2025-12-04 10:52:58.786170][4970607.636104397] 2025-12-04T10:52:58.7865214Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:52:58.7868685Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_pycute/test_coalesce.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:52:58.786635] 2025-12-04T10:53:01.0057488Z 2025-12-04T10:53:01.0059192Z distributed/_pycute/test_coalesce 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._pycute.test_coalesce_1.1_a786bf060a729c3a_.log 2025-12-04T10:53:01.0060920Z Running 1 items in this shard: test/distributed/_pycute/test_coalesce.py::TestCoalesce::test_coalesce 2025-12-04T10:53:01.0061466Z 2025-12-04T10:53:01.0061901Z Finished distributed/_pycute/test_coalesce 1/1 ... [2025-12-04 10:53:01.005378][4970609.855310296], took 0.04min 2025-12-04T10:53:01.0078903Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:53:01.0103505Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:53:01.0109976Z Running distributed/_pycute/test_complement 1/1 ... [2025-12-04 10:53:01.010683][4970609.860616333] 2025-12-04T10:53:01.0110788Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:53:01.0114009Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_pycute/test_complement.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:53:01.011146] 2025-12-04T10:53:03.4807312Z 2025-12-04T10:53:03.4808786Z distributed/_pycute/test_complement 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._pycute.test_complement_1.1_7bc7c3e0e1cf3261_.log 2025-12-04T10:53:03.4810337Z Running 1 items in this shard: test/distributed/_pycute/test_complement.py::TestComplement::test_complement 2025-12-04T10:53:03.4810979Z 2025-12-04T10:53:03.4811424Z Finished distributed/_pycute/test_complement 1/1 ... [2025-12-04 10:53:03.480329][4970612.330260909], took 0.04min 2025-12-04T10:53:03.4829038Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:53:03.4854669Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:53:03.4862510Z Running distributed/_pycute/test_composition 1/1 ... [2025-12-04 10:53:03.485822][4970612.335755535] 2025-12-04T10:53:03.4863207Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:53:03.4866223Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_pycute/test_composition.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:53:03.486306] 2025-12-04T10:53:05.9064481Z 2025-12-04T10:53:05.9065879Z distributed/_pycute/test_composition 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._pycute.test_composition_1.1_6997d3c9b7b93005_.log 2025-12-04T10:53:05.9067168Z Running 1 items in this shard: test/distributed/_pycute/test_composition.py::TestComposition::test_composition 2025-12-04T10:53:05.9067656Z 2025-12-04T10:53:05.9068096Z Finished distributed/_pycute/test_composition 1/1 ... [2025-12-04 10:53:05.905989][4970614.755920378], took 0.04min 2025-12-04T10:53:05.9084473Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:53:05.9110155Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:53:05.9117883Z Running distributed/_pycute/test_int_tuple 1/1 ... [2025-12-04 10:53:05.911546][4970614.761479242] 2025-12-04T10:53:05.9118581Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:53:05.9123716Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_pycute/test_int_tuple.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:53:05.912115] 2025-12-04T10:53:08.0299872Z 2025-12-04T10:53:08.0301479Z distributed/_pycute/test_int_tuple 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._pycute.test_int_tuple_1.1_f37b696425874383_.log 2025-12-04T10:53:08.0307244Z Running 12 items in this shard: test/distributed/_pycute/test_int_tuple.py::TestIntTuple::test_crd2idx_basic, test/distributed/_pycute/test_int_tuple.py::TestIntTuple::test_crd2idx_idx2crd_roundtrip, test/distributed/_pycute/test_int_tuple.py::TestIntTuple::test_crd2idx_int_with_tuple_shape, test/distributed/_pycute/test_int_tuple.py::TestIntTuple::test_crd2idx_none, test/distributed/_pycute/test_int_tuple.py::TestIntTuple::test_crd2idx_tuple, test/distributed/_pycute/test_int_tuple.py::TestIntTuple::test_idx2crd_basic, test/distributed/_pycute/test_int_tuple.py::TestIntTuple::test_idx2crd_crd2idx_roundtrip, test/distributed/_pycute/test_int_tuple.py::TestIntTuple::test_idx2crd_tuple, test/distributed/_pycute/test_int_tuple.py::TestIntTuple::test_inner_product, test/distributed/_pycute/test_int_tuple.py::TestIntTuple::test_product, test/distributed/_pycute/test_int_tuple.py::TestIntTuple::test_shape_div, test/distributed/_pycute/test_int_tuple.py::TestIntTuple::test_suffix_product 2025-12-04T10:53:08.0311772Z 2025-12-04T10:53:08.0312223Z Finished distributed/_pycute/test_int_tuple 1/1 ... [2025-12-04 10:53:08.029617][4970616.879555937], took 0.04min 2025-12-04T10:53:08.0317080Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:53:08.0339470Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:53:08.0345551Z Running distributed/_pycute/test_left_inverse 1/1 ... 
[2025-12-04 10:53:08.034267][4970616.884201334] 2025-12-04T10:53:08.0346260Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:53:08.0349281Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_pycute/test_left_inverse.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:53:08.034682] 2025-12-04T10:53:10.1536040Z 2025-12-04T10:53:10.1537553Z distributed/_pycute/test_left_inverse 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._pycute.test_left_inverse_1.1_f3ecd8c0b9f3435d_.log 2025-12-04T10:53:10.1539105Z Running 1 items in this shard: test/distributed/_pycute/test_left_inverse.py::TestLeftInverse::test_left_inverse 2025-12-04T10:53:10.1539684Z 2025-12-04T10:53:10.1540135Z Finished distributed/_pycute/test_left_inverse 1/1 ... [2025-12-04 10:53:10.153244][4970619.00317577], took 0.04min 2025-12-04T10:53:10.1557562Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:53:10.1583317Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:53:10.1591562Z Running distributed/_pycute/test_right_inverse 1/1 ... [2025-12-04 10:53:10.158750][4970619.008683445] 2025-12-04T10:53:10.1592256Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:53:10.1594687Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_pycute/test_right_inverse.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:53:10.159217] 2025-12-04T10:53:12.2773213Z 2025-12-04T10:53:12.2774799Z distributed/_pycute/test_right_inverse 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._pycute.test_right_inverse_1.1_2e2bf9433f52b482_.log 2025-12-04T10:53:12.2776349Z Running 1 items in this shard: test/distributed/_pycute/test_right_inverse.py::TestRightInverse::test_right_inverse 2025-12-04T10:53:12.2776951Z 2025-12-04T10:53:12.2777420Z Finished distributed/_pycute/test_right_inverse 1/1 ... [2025-12-04 10:53:12.276956][4970621.126889785], took 0.04min 2025-12-04T10:53:12.2794866Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:53:12.2819880Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:53:12.2825624Z Running distributed/tensor/debug/test_debug_mode 1/1 ... [2025-12-04 10:53:12.282203][4970621.132137643] 2025-12-04T10:53:12.2826377Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:53:12.2828982Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/debug/test_debug_mode.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:53:12.282643] 2025-12-04T10:53:53.4691722Z 2025-12-04T10:53:53.4693359Z distributed/tensor/debug/test_debug_mode 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.debug.test_debug_mode_1.1_750cc3d55437800a_.log 2025-12-04T10:53:53.4707143Z Running 25 items in this shard: test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_check_hash_mismatches, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_check_structure_mismatches, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_check_triton_hash_mismatches, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_compile, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_debug_mode_backward, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_debug_mode_densor_redistribution_trace, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_debug_mode_einsum, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_debug_mode_higher_order_cond, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_debug_mode_mm, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_debug_string_inside_context, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_fake_tensor, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_nested_debug_mode_has_inner_mode_False_has_outer_mode_False, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_nested_debug_mode_has_inner_mode_False_has_outer_mode_True, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_nested_debug_mode_has_inner_mode_True_has_outer_mode_False, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_nested_debug_mode_has_inner_mode_True_has_outer_mode_True, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_nn_module, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_pretty_print_dtensor_make_fx, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_real_tensor, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_tensor_attributes, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_tensor_hash_redistribute, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugMode::test_triton_kernel_logs, test/distributed/tensor/debug/test_debug_mode.py::TestDebugModeUtils::test_hash_empty_tenor, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugModeNCCLBackend::test_allgather_base, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugModeNCCLBackend::test_allgather_base_async_op, test/distributed/tensor/debug/test_debug_mode.py::TestDTensorDebugModeNCCLBackend::test_allgather_functional_with_async_collective_tensor 2025-12-04T10:53:53.4721065Z 2025-12-04T10:53:53.4721541Z Finished distributed/tensor/debug/test_debug_mode 1/1 ... 
[2025-12-04 10:53:53.468776][4970662.318708096], took 0.69min 2025-12-04T10:53:53.4723043Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:53:53.4739495Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:53:53.4745359Z Running distributed/_composable/test_replicate 1/1 ... [2025-12-04 10:53:53.474323][4970662.324257341] 2025-12-04T10:53:53.4746296Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:53:53.4750038Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_composable/test_replicate.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:53:53.474775] 2025-12-04T10:54:58.5585203Z 2025-12-04T10:54:58.5589907Z distributed/_composable/test_replicate 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._composable.test_replicate_1.1_61a7757c8eda0a64_.log 2025-12-04T10:54:58.5599475Z Running 17 items in this shard: test/distributed/_composable/test_replicate.py::ReplicateStateDictTest::test_replicate_non_root_multiple_save_load, test/distributed/_composable/test_replicate.py::ReplicateStateDictTest::test_replicate_single_module_save_load, test/distributed/_composable/test_replicate.py::ReplicateTest::test_replicate_device_id, test/distributed/_composable/test_replicate.py::ReplicateTest::test_replicate_ignore_module, test/distributed/_composable/test_replicate.py::ReplicateTest::test_replicate_move_args_kwargs_to_device, test/distributed/_composable/test_replicate.py::ReplicateTest::test_replicate_multi_module, test/distributed/_composable/test_replicate.py::ReplicateTest::test_replicate_single_module, test/distributed/_composable/test_replicate.py::ReplicateTest::test_replicate_with_kwargs, test/distributed/_composable/test_replicate.py::ReplicateTest::test_replicate_wrong_device_id_type, test/distributed/_composable/test_replicate.py::ReplicateFullyShardInit::test_replicate_device_id, test/distributed/_composable/test_replicate.py::ReplicateFullyShardInit::test_replicate_fully_shard_init, test/distributed/_composable/test_replicate.py::ReplicateFullyShardInit::test_replicate_ignore_module, test/distributed/_composable/test_replicate.py::ReplicateFullyShardInit::test_replicate_move_args_kwargs_to_device, test/distributed/_composable/test_replicate.py::ReplicateFullyShardInit::test_replicate_multi_module, test/distributed/_composable/test_replicate.py::ReplicateFullyShardInit::test_replicate_single_module, test/distributed/_composable/test_replicate.py::ReplicateFullyShardInit::test_replicate_with_kwargs, test/distributed/_composable/test_replicate.py::ReplicateFullyShardInit::test_replicate_wrong_device_id_type 2025-12-04T10:54:58.5608776Z 2025-12-04T10:54:58.5609242Z Finished distributed/_composable/test_replicate 1/1 ... [2025-12-04 10:54:58.558059][4970727.407990405], took 1.08min 2025-12-04T10:54:58.5610768Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:54:58.5633950Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:54:58.5642000Z Running distributed/checkpoint/test_pg_transport 1/1 ... 
[2025-12-04 10:54:58.563845][4970727.413778717] 2025-12-04T10:54:58.5642704Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:54:58.5646881Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/checkpoint/test_pg_transport.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:54:58.564333] 2025-12-04T10:55:10.4007347Z 2025-12-04T10:55:10.4008987Z distributed/checkpoint/test_pg_transport 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.checkpoint.test_pg_transport_1.1_823924cc4c929986_.log 2025-12-04T10:55:10.4021536Z Running 21 items in this shard: test/distributed/checkpoint/test_pg_transport.py::PgTransportCPU::test_pg_transport, test/distributed/checkpoint/test_pg_transport.py::PgTransportCPU::test_pg_transport_with_mixed_content, test/distributed/checkpoint/test_pg_transport.py::PgTransportCPU::test_pg_transport_with_sharded_tensor, test/distributed/checkpoint/test_pg_transport.py::PgTransportGPU::test_pg_transport, test/distributed/checkpoint/test_pg_transport.py::PgTransportGPU::test_pg_transport_with_mixed_content, test/distributed/checkpoint/test_pg_transport.py::PgTransportGPU::test_pg_transport_with_sharded_tensor, test/distributed/checkpoint/test_pg_transport.py::TestCastTensor::test_cast_tensor_different_dtypes, test/distributed/checkpoint/test_pg_transport.py::TestCastTensor::test_cast_tensor_with_offset, test/distributed/checkpoint/test_pg_transport.py::TestCastTensor::test_cast_tensor_with_stride, test/distributed/checkpoint/test_pg_transport.py::TestPrepareTensor::test_prepare_tensor_basic, test/distributed/checkpoint/test_pg_transport.py::TestPrepareTensor::test_prepare_tensor_different_shapes, test/distributed/checkpoint/test_pg_transport.py::TestPrepareTensor::test_prepare_tensor_with_stride, test/distributed/checkpoint/test_pg_transport.py::TestPrepareStateDict::test_prepare_state_dict_basic, test/distributed/checkpoint/test_pg_transport.py::TestPrepareStateDict::test_prepare_state_dict_nested, test/distributed/checkpoint/test_pg_transport.py::TestPrepareStateDict::test_prepare_state_dict_with_non_tensor_values, test/distributed/checkpoint/test_pg_transport.py::TestPGTransportMocked::test_recv_checkpoint_basic, test/distributed/checkpoint/test_pg_transport.py::TestPGTransportMocked::test_recv_checkpoint_with_state_dict_callback, test/distributed/checkpoint/test_pg_transport.py::TestPGTransportMocked::test_send_checkpoint_basic, test/distributed/checkpoint/test_pg_transport.py::TestPGTransportMocked::test_send_checkpoint_empty_state_dict, test/distributed/checkpoint/test_pg_transport.py::TestPGTransportMocked::test_send_checkpoint_with_non_tensor_values, test/distributed/checkpoint/test_pg_transport.py::TestPGTransportEdgeCases::test_send_checkpoint_with_cpu_tensors 2025-12-04T10:55:10.4032190Z 2025-12-04T10:55:10.4032659Z Finished distributed/checkpoint/test_pg_transport 1/1 ... [2025-12-04 10:55:10.400393][4970739.25032384], took 0.20min 2025-12-04T10:55:10.4034154Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:55:10.4056347Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:55:10.4063287Z Running distributed/_composable/fsdp/test_fully_shard_mixed_precision 1/1 ... 
[2025-12-04 10:55:10.406144][4970739.256076752] 2025-12-04T10:55:10.4064428Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:55:10.4068363Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_composable/fsdp/test_fully_shard_mixed_precision.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:55:10.406609] 2025-12-04T10:56:19.0979457Z 2025-12-04T10:56:19.0981558Z distributed/_composable/fsdp/test_fully_shard_mixed_precision 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._composable.fsdp.test_fully_shard_mixed_precision_1.1_e8c2b782dc46456b_.log 2025-12-04T10:56:19.0989497Z Running 9 items in this shard: test/distributed/_composable/fsdp/test_fully_shard_mixed_precision.py::TestFullyShardMixedPrecisionTraining::test_compute_dtype, test/distributed/_composable/fsdp/test_fully_shard_mixed_precision.py::TestFullyShardMixedPrecisionTraining::test_grad_acc_with_reduce_dtype, test/distributed/_composable/fsdp/test_fully_shard_mixed_precision.py::TestFullyShardMixedPrecisionTraining::test_reduce_dtype, test/distributed/_composable/fsdp/test_fully_shard_mixed_precision.py::TestFullyShardMixedPrecisionCasts::test_clamp_reduce_dtype, test/distributed/_composable/fsdp/test_fully_shard_mixed_precision.py::TestFullyShardMixedPrecisionCasts::test_dataclass_input, test/distributed/_composable/fsdp/test_fully_shard_mixed_precision.py::TestFullyShardMixedPrecisionCasts::test_float16_on_one_submodule, test/distributed/_composable/fsdp/test_fully_shard_mixed_precision.py::TestFullyShardMixedPrecisionCasts::test_norm_modules_bf16, test/distributed/_composable/fsdp/test_fully_shard_mixed_precision.py::TestFullyShardMixedPrecisionCasts::test_norm_modules_fp16, test/distributed/_composable/fsdp/test_fully_shard_mixed_precision.py::TestFullyShardMixedPrecisionCasts::test_submodules_with_external_inputs 2025-12-04T10:56:19.0995800Z 2025-12-04T10:56:19.0996381Z Finished distributed/_composable/fsdp/test_fully_shard_mixed_precision 1/1 ... [2025-12-04 10:56:19.097476][4970807.947407988], took 1.14min 2025-12-04T10:56:19.1002621Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T10:56:19.1028001Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T10:56:19.1034263Z Running distributed/checkpoint/test_utils 1/1 ... [2025-12-04 10:56:19.103147][4970807.953080442] 2025-12-04T10:56:19.1034957Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T10:56:19.1038247Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/checkpoint/test_utils.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 10:56:19.103610]
2025-12-04T10:56:55.9386317Z
2025-12-04T10:56:55.9391495Z distributed/checkpoint/test_utils 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.checkpoint.test_utils_1.1_fdf4dd0bc166b6de_.log
2025-12-04T10:56:55.9399771Z Running 16 items in this shard: test/distributed/checkpoint/test_utils.py::TestMedatadaIndex::test_dcp_logger, test/distributed/checkpoint/test_utils.py::TestMedatadaIndex::test_flat_data, test/distributed/checkpoint/test_utils.py::TestMedatadaIndex::test_index_hint_ignored_on_equals, test/distributed/checkpoint/test_utils.py::TestMedatadaIndex::test_index_hint_ignored_on_hash, test/distributed/checkpoint/test_utils.py::TestMedatadaIndex::test_init_convert_offset, test/distributed/checkpoint/test_utils.py::TestMedatadaIndex::test_sharded_tensor_lookup, test/distributed/checkpoint/test_utils.py::TestReaderView::testAllRead, test/distributed/checkpoint/test_utils.py::TestReaderView::testLongRead, test/distributed/checkpoint/test_utils.py::TestReaderView::testLongReadinto, test/distributed/checkpoint/test_utils.py::TestReaderView::testShortRead, test/distributed/checkpoint/test_utils.py::TestReaderView::testShortReadinto, test/distributed/checkpoint/test_utils.py::TestDistWrapper::test_barrier, test/distributed/checkpoint/test_utils.py::TestDistWrapper::test_broadcast_object_global_local_mismatch, test/distributed/checkpoint/test_utils.py::TestDistWrapper::test_broadcast_object_with_nonzero_coordinator, test/distributed/checkpoint/test_utils.py::TestDistWrapper::test_gather_object, test/distributed/checkpoint/test_utils.py::TestDistWrapper::test_scatter_object
2025-12-04T10:56:55.9406639Z
2025-12-04T10:56:55.9407090Z Finished distributed/checkpoint/test_utils 1/1 ... [2025-12-04 10:56:55.938291][4970844.788223189], took 0.61min
2025-12-04T10:56:55.9410193Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T10:56:55.9434496Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T10:56:55.9441051Z Running distributed/checkpoint/_experimental/test_checkpoint_process 1/1 ... [2025-12-04 10:56:55.943889][4970844.793822963]
2025-12-04T10:56:55.9441874Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T10:56:55.9445694Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/checkpoint/_experimental/test_checkpoint_process.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:56:55.944341]
2025-12-04T10:57:14.6917299Z
2025-12-04T10:57:14.6917759Z distributed/checkpoint/_experimental/test_checkpoint_process 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.checkpoint._experimental.test_checkpoint_process_1.1_461b44d1f1b1f3a6_.log
2025-12-04T10:57:14.6920988Z Running 15 items in this shard: test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestRequestTypes::test_request_type_enum, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestRequestTypes::test_worker_request, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestRequestTypes::test_worker_response, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestCheckpointProcessConfig::test_custom_options, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestCheckpointProcessConfig::test_default_options, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestCheckpointProcess::test_checkpoint_process_initialization, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestCheckpointProcess::test_checkpoint_write_future_state_dict, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestCheckpointProcess::test_checkpoint_write_sync_state_dict, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestCheckpointProcess::test_checkpoint_write_with_kwargs, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestCheckpointProcess::test_communication_error_handling, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestCheckpointProcess::test_forced_termination, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestCheckpointProcess::test_graceful_termination, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestCheckpointProcess::test_shared_memory_tensor_ipc, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestCheckpointProcess::test_subprocess_initialization_failure, test/distributed/checkpoint/_experimental/test_checkpoint_process.py::TestCheckpointProcess::test_subprocess_initialization_timeout
2025-12-04T10:57:14.6927128Z
2025-12-04T10:57:14.6927697Z Finished distributed/checkpoint/_experimental/test_checkpoint_process 1/1 ... [2025-12-04 10:57:14.691510][4970863.54144299], took 0.31min
2025-12-04T10:57:14.6941356Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T10:57:14.6966946Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T10:57:14.6972787Z Running distributed/test_c10d_logger 1/1 ... [2025-12-04 10:57:14.697105][4970863.547038505]
2025-12-04T10:57:14.6973425Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T10:57:14.6977713Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_c10d_logger.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:57:14.697552]
2025-12-04T10:57:25.3824432Z
2025-12-04T10:57:25.3825909Z distributed/test_c10d_logger 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_c10d_logger_1.1_4115b272e1b13e83_.log
2025-12-04T10:57:25.3827699Z Running 2 items in this shard: test/distributed/test_c10d_logger.py::C10dErrorLoggerTest::test_exception_logger, test/distributed/test_c10d_logger.py::C10dErrorLoggerTest::test_get_or_create_logger
2025-12-04T10:57:25.3828704Z
2025-12-04T10:57:25.3829116Z Finished distributed/test_c10d_logger 1/1 ... [2025-12-04 10:57:25.382027][4970874.231959291], took 0.18min
2025-12-04T10:57:25.3849056Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T10:57:25.3874737Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T10:57:25.3880263Z Running distributed/_composable/test_replicate_training 1/1 ... [2025-12-04 10:57:25.387765][4970874.237699264]
2025-12-04T10:57:25.3881187Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T10:57:25.3884513Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_composable/test_replicate_training.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 10:57:25.388222]
2025-12-04T11:00:17.1136408Z
2025-12-04T11:00:17.1138095Z distributed/_composable/test_replicate_training 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._composable.test_replicate_training_1.1_c6c33d66b33f9aa1_.log
2025-12-04T11:00:17.1149157Z Running 17 items in this shard: test/distributed/_composable/test_replicate_training.py::TestReplicateForwardInputs::test_root_move_forward_input_to_device, test/distributed/_composable/test_replicate_training.py::TestReplicateRegisteredParams::test_param_registration_after_backward, test/distributed/_composable/test_replicate_training.py::TestReplicateRegisteredParams::test_param_registration_after_forward, test/distributed/_composable/test_replicate_training.py::TestReplicateCastAfterInit::test_to_float64_after_init, test/distributed/_composable/test_replicate_training.py::TestReplicate1DTrainingCore::test_explicit_prefetching, test/distributed/_composable/test_replicate_training.py::TestReplicate1DTrainingCore::test_multi_forward_module, test/distributed/_composable/test_replicate_training.py::TestReplicate1DTrainingCore::test_non_root_forward_backward, test/distributed/_composable/test_replicate_training.py::TestReplicate1DTrainingCore::test_post_optim_event, test/distributed/_composable/test_replicate_training.py::TestReplicate1DTrainingCore::test_train_parity_multi_group_cpu_offload_eager, test/distributed/_composable/test_replicate_training.py::TestReplicate1DTrainingCore::test_train_parity_multi_groups, test/distributed/_composable/test_replicate_training.py::TestReplicate1DTrainingCore::test_train_parity_single_group, test/distributed/_composable/test_replicate_training.py::TestReplicateTrainingCompose::test_train_parity_with_activation_checkpointing, test/distributed/_composable/test_replicate_training.py::TestReplicateSharedParams::test_train_parity_with_shared_params, test/distributed/_composable/test_replicate_training.py::TestReplicateGradientAccumulation::test_1f1b_microbatching, test/distributed/_composable/test_replicate_training.py::TestReplicateGradientAccumulation::test_gradient_accumulation, test/distributed/_composable/test_replicate_training.py::TestReplicateCustomForwardMethod::test_register_fsdp_forward_method, test/distributed/_composable/test_replicate_training.py::TestReplicateTPTraining::test_replicate_tp
2025-12-04T11:00:17.1160194Z
2025-12-04T11:00:17.1160772Z Finished distributed/_composable/test_replicate_training 1/1 ... [2025-12-04 11:00:17.113385][4971045.963314971], took 2.86min
2025-12-04T11:00:17.1164153Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T11:00:17.1191826Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T11:00:17.1198893Z Running distributed/optim/test_apply_optimizer_in_backward 1/1 ... [2025-12-04 11:00:17.119619][4971045.969551454]
2025-12-04T11:00:17.1199644Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T11:00:17.1203524Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/optim/test_apply_optimizer_in_backward.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:00:17.120113]
2025-12-04T11:00:18.3796216Z
2025-12-04T11:00:18.3797899Z distributed/optim/test_apply_optimizer_in_backward 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.optim.test_apply_optimizer_in_backward_1.1_710f9e13d720bb74_.log
2025-12-04T11:00:18.3798505Z
2025-12-04T11:00:18.3798787Z Finished distributed/optim/test_apply_optimizer_in_backward 1/1 ... [2025-12-04 11:00:18.379328][4971047.229261008], took 0.02min
2025-12-04T11:00:18.3821514Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T11:00:18.3848957Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T11:00:18.3855976Z Running distributed/fsdp/test_fsdp_uneven 1/1 ... [2025-12-04 11:00:18.385432][4971047.235364821]
2025-12-04T11:00:18.3856668Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T11:00:18.3861145Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/fsdp/test_fsdp_uneven.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:00:18.385908]
2025-12-04T11:01:00.4778122Z
2025-12-04T11:01:00.4779512Z PRINTING LOG FILE of distributed/fsdp/test_fsdp_uneven 1/1 (test/test-reports/distributed.fsdp.test_fsdp_uneven_1.1_a8a4caae48d3fe02_.log)
2025-12-04T11:01:00.4781193Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_uneven/distributed.fsdp.test_fsdp_uneven-32a79c68ea2ed6e8.xml
2025-12-04T11:01:00.4782859Z ============================= test session starts ==============================
2025-12-04T11:01:00.4783726Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T11:01:00.4784380Z cachedir: .pytest_cache
2025-12-04T11:01:00.4785196Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T11:01:00.4786022Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T11:01:00.4786429Z configfile: pytest.ini
2025-12-04T11:01:00.4787214Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T11:01:00.4788172Z collecting ... collected 1 item
2025-12-04T11:01:00.4788641Z stepcurrent: Cannot find last run test, not skipping
2025-12-04T11:01:00.4789558Z Running 1 items in this shard: test/distributed/fsdp/test_fsdp_uneven.py::TestUnevenParamShardCUDA::test_one_iteration_cuda
2025-12-04T11:01:00.4790203Z
2025-12-04T11:01:00.4791267Z distributed/fsdp/test_fsdp_uneven.py::TestUnevenParamShardCUDA::test_one_iteration_cuda I1204 11:00:20.223000 261082 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 261151
2025-12-04T11:01:00.4792875Z I1204 11:00:20.223000 261082 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 261152
2025-12-04T11:01:00.4794059Z I1204 11:00:20.224000 261082 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 261153
2025-12-04T11:01:00.4795192Z I1204 11:00:20.225000 261082 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 261154
2025-12-04T11:01:00.4796300Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T11:01:00.4797432Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T11:01:00.4799068Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T11:01:00.4800955Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T11:01:00.4802554Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T11:01:00.4804053Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T11:01:00.4805707Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.4807247Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.4808783Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.4810302Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.4811869Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T11:01:00.4813356Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T11:01:00.4814957Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T11:01:00.4816494Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T11:01:00.4818703Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 1. CUDA driver allocated memory was 2317352960 and is now 3307208704.
2025-12-04T11:01:00.4820778Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.4821944Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T11:01:00.4823833Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]     PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda
2025-12-04T11:01:00.4825421Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.4826628Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T11:01:00.4828010Z [rank1]:E1204 11:00:28.846000 261152 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T11:01:00.4828827Z dist init r=1, world=4
2025-12-04T11:01:00.4829513Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T11:01:00.4830679Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T11:01:00.4832395Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T11:01:00.4833989Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T11:01:00.4835572Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T11:01:00.4837056Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T11:01:00.4838512Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.4840053Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.4841660Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.4843196Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.4844836Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T11:01:00.4846327Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T11:01:00.4847840Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T11:01:00.4849378Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T11:01:00.4851545Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 2. CUDA driver allocated memory was 2300575744 and is now 3290431488.
2025-12-04T11:01:00.4853511Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.4854671Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T11:01:00.4856539Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]     PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda
2025-12-04T11:01:00.4858119Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.4859312Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T11:01:00.4860730Z [rank2]:E1204 11:00:28.866000 261153 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T11:01:00.4861514Z dist init r=2, world=4
2025-12-04T11:01:00.4862188Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T11:01:00.4863386Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T11:01:00.4864989Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T11:01:00.4866570Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T11:01:00.4868162Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T11:01:00.4869645Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T11:01:00.4871155Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.4872694Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.4874236Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.4875853Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.4877381Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T11:01:00.4878873Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T11:01:00.4880371Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T11:01:00.4882006Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T11:01:00.4884117Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 3. CUDA driver allocated memory was 2250244096 and is now 3240099840.
2025-12-04T11:01:00.4886090Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.4887242Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T11:01:00.4889107Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]     PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda
2025-12-04T11:01:00.4890751Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.4891943Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T11:01:00.4893419Z [rank3]:E1204 11:00:28.878000 261154 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T11:01:00.4894213Z dist init r=3, world=4
2025-12-04T11:01:00.4894907Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T11:01:00.4896026Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T11:01:00.4897623Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T11:01:00.4917393Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T11:01:00.4919064Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T11:01:00.4920574Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T11:01:00.4922132Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.4923694Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.4925426Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.4926964Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.4928491Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T11:01:00.4929986Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T11:01:00.4931570Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T11:01:00.4933122Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T11:01:00.4935266Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 0. CUDA driver allocated memory was 2459959296 and is now 3449815040.
2025-12-04T11:01:00.4937249Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.4938428Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T11:01:00.4940311Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]     PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda
2025-12-04T11:01:00.4941971Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.4943274Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T11:01:00.4944648Z [rank0]:E1204 11:00:28.924000 261151 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T11:01:00.4945462Z dist init r=0, world=4
2025-12-04T11:01:00.4946829Z [rank0]:[W1204 11:00:29.074355908 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T11:01:00.4948199Z FAILED [10.5227s] [100%]
2025-12-04T11:01:00.4948436Z
2025-12-04T11:01:00.4948635Z =================================== FAILURES ===================================
2025-12-04T11:01:00.4949266Z _______________ TestUnevenParamShardCUDA.test_one_iteration_cuda _______________
2025-12-04T11:01:00.4949849Z Traceback (most recent call last):
2025-12-04T11:01:00.4950715Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T11:01:00.4951530Z     self._join_processes(fn)
2025-12-04T11:01:00.4952362Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T11:01:00.4953249Z     self._check_return_codes(fn, elapsed_time)
2025-12-04T11:01:00.4954137Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T11:01:00.4955098Z     raise RuntimeError(error)
2025-12-04T11:01:00.4955606Z RuntimeError: Process 1 exited with error code 10 and exception:
2025-12-04T11:01:00.4956147Z Traceback (most recent call last):
2025-12-04T11:01:00.4956945Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T11:01:00.4957763Z     getattr(self, test_name)()
2025-12-04T11:01:00.4958536Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T11:01:00.4959318Z     fn()
2025-12-04T11:01:00.4960000Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.4960828Z     method(*args, **kwargs)
2025-12-04T11:01:00.4961563Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.4962338Z     method(*args, **kwargs)
2025-12-04T11:01:00.4963074Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T11:01:00.4963820Z     with policy():
2025-12-04T11:01:00.4964532Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T11:01:00.4965314Z     raise RuntimeError(msg)
2025-12-04T11:01:00.4966617Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 1. CUDA driver allocated memory was 2317352960 and is now 3307208704.
2025-12-04T11:01:00.4967797Z
2025-12-04T11:01:00.4968052Z To execute this test, run the following from the base repo dir:
2025-12-04T11:01:00.4969089Z     PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda
2025-12-04T11:01:00.4969895Z
2025-12-04T11:01:00.4970192Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T11:01:00.4970672Z
2025-12-04T11:01:00.4970678Z
2025-12-04T11:01:00.4970953Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T11:01:00.4971720Z Process 1 terminated with exit code 10, terminating remaining processes.
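Every worker above failed the same way: the shard's mem_leak_check configuration wraps the test body in a policy that snapshots CUDA caching-allocator (and driver) memory counters before the test and compares them afterwards, raising when the numbers grow (here 512 -> 1024 bytes per device). A minimal sketch of that before/after pattern, assuming only the public torch.cuda counters; the real check in torch/testing/_internal/common_utils.py adds retries, empty_cache() calls, and driver-level confirmation, and CudaLeakCheckSketch is an illustrative name, not the harness's own:

    import torch

    class CudaLeakCheckSketch:
        """Hedged sketch of a CUDA memory-leak check context manager."""

        def __init__(self, test_name: str):
            self.test_name = test_name

        def __enter__(self):
            torch.cuda.synchronize()
            # Snapshot caching-allocator usage per device before the test runs.
            self.before = [torch.cuda.memory_allocated(d)
                           for d in range(torch.cuda.device_count())]
            return self

        def __exit__(self, exc_type, exc, tb):
            if exc_type is not None:
                return False  # never mask the test's own failure
            torch.cuda.synchronize()
            for d, before in enumerate(self.before):
                after = torch.cuda.memory_allocated(d)
                if after > before:
                    raise RuntimeError(
                        f"CUDA leak in {self.test_name}: allocated memory was "
                        f"{before} and is now reported as {after} on device {d}.")
            return False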
2025-12-04T11:01:00.4972938Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_uneven/distributed.fsdp.test_fsdp_uneven-32a79c68ea2ed6e8.xml -
2025-12-04T11:01:00.4974071Z =========================== short test summary info ============================
2025-12-04T11:01:00.4975143Z FAILED [10.5227s] distributed/fsdp/test_fsdp_uneven.py::TestUnevenParamShardCUDA::test_one_iteration_cuda - RuntimeError: Process 1 exited with error code 10 and exception:
2025-12-04T11:01:00.4976152Z Traceback (most recent call last):
2025-12-04T11:01:00.4976976Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T11:01:00.4977790Z     getattr(self, test_name)()
2025-12-04T11:01:00.4978563Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T11:01:00.4979342Z     fn()
2025-12-04T11:01:00.4980009Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.4980810Z     method(*args, **kwargs)
2025-12-04T11:01:00.4981540Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.4982300Z     method(*args, **kwargs)
2025-12-04T11:01:00.4983027Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T11:01:00.4983880Z     with policy():
2025-12-04T11:01:00.4984585Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T11:01:00.4985356Z     raise RuntimeError(msg)
2025-12-04T11:01:00.4986675Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 1. CUDA driver allocated memory was 2317352960 and is now 3307208704.
2025-12-04T11:01:00.4987849Z
2025-12-04T11:01:00.4988109Z To execute this test, run the following from the base repo dir:
2025-12-04T11:01:00.4989147Z     PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda
2025-12-04T11:01:00.4989929Z
2025-12-04T11:01:00.4990231Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T11:01:00.4990896Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
2025-12-04T11:01:00.4991428Z ============================== 1 failed in 10.53s ==============================
2025-12-04T11:01:00.4991873Z Got exit code 1
2025-12-04T11:01:00.4992194Z Retrying single test...
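Because pytest ran with -x and --reruns=0, the session stops at the first failure and hands control back to the runner, which then re-invokes pytest for just that test in a fresh process (the stepcurrent line below confirms only the failed item is selected). A simplified sketch of that retry step, assuming a plain subprocess invocation; retry_single_test is an illustrative name, and the real run_test.py also rotates the XML report name and passes its sharding flags:

    import subprocess
    import sys

    def retry_single_test(test_id: str, max_retries: int = 1) -> int:
        """Re-run a single failed pytest node id in a fresh interpreter."""
        code = 1
        for attempt in range(max_retries + 1):
            code = subprocess.run(
                [sys.executable, "-m", "pytest", "-v", "-x", test_id],
            ).returncode
            if code == 0:
                break
            print(f"Got exit code {code}")
            if attempt < max_retries:
                print("Retrying single test...")
        return code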
2025-12-04T11:01:00.4993058Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_uneven/distributed.fsdp.test_fsdp_uneven-1e76176a051c9c3e.xml
2025-12-04T11:01:00.4994018Z ============================= test session starts ==============================
2025-12-04T11:01:00.4994732Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T11:01:00.4995358Z cachedir: .pytest_cache
2025-12-04T11:01:00.4996108Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T11:01:00.4996900Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T11:01:00.4997305Z configfile: pytest.ini
2025-12-04T11:01:00.4998063Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T11:01:00.4998871Z collecting ... collected 1 item
2025-12-04T11:01:00.4999763Z stepcurrent: skipping 0 already run items. Running only test/distributed/fsdp/test_fsdp_uneven.py::TestUnevenParamShardCUDA::test_one_iteration_cuda
2025-12-04T11:01:00.5000912Z Running 1 items in this shard
2025-12-04T11:01:00.5001152Z
2025-12-04T11:01:00.5002117Z distributed/fsdp/test_fsdp_uneven.py::TestUnevenParamShardCUDA::test_one_iteration_cuda I1204 11:00:33.462000 261484 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 261553
2025-12-04T11:01:00.5003686Z I1204 11:00:33.463000 261484 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 261554
2025-12-04T11:01:00.5004832Z I1204 11:00:33.464000 261484 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 261555
2025-12-04T11:01:00.5005964Z I1204 11:00:33.464000 261484 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 261556
2025-12-04T11:01:00.5007059Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T11:01:00.5008199Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T11:01:00.5009825Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T11:01:00.5011488Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T11:01:00.5013199Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T11:01:00.5014698Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T11:01:00.5016163Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.5017719Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.5019256Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.5020844Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.5022389Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T11:01:00.5023893Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T11:01:00.5025424Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T11:01:00.5026981Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T11:01:00.5029116Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 2. CUDA driver allocated memory was 2300575744 and is now 3290431488.
2025-12-04T11:01:00.5031241Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.5032410Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T11:01:00.5034283Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]     PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda
2025-12-04T11:01:00.5035866Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.5037075Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T11:01:00.5038453Z [rank2]:E1204 11:00:41.976000 261555 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T11:01:00.5039262Z dist init r=2, world=4
2025-12-04T11:01:00.5039938Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T11:01:00.5041095Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T11:01:00.5042702Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T11:01:00.5044406Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T11:01:00.5046016Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T11:01:00.5047501Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T11:01:00.5048948Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.5050481Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.5052082Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.5054028Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.5055584Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T11:01:00.5057078Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T11:01:00.5058580Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T11:01:00.5060128Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T11:01:00.5062383Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 3. CUDA driver allocated memory was 2250244096 and is now 3240099840.
2025-12-04T11:01:00.5064361Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.5065521Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T11:01:00.5067375Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]     PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda
2025-12-04T11:01:00.5068958Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.5070164Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T11:01:00.5071584Z [rank3]:E1204 11:00:41.979000 261556 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T11:01:00.5072387Z dist init r=3, world=4
2025-12-04T11:01:00.5073054Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T11:01:00.5074160Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T11:01:00.5075854Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T11:01:00.5077431Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T11:01:00.5079009Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T11:01:00.5080491Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T11:01:00.5082056Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.5083602Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.5085141Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.5086671Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.5088204Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T11:01:00.5089700Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T11:01:00.5091248Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T11:01:00.5092864Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T11:01:00.5094967Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 1. CUDA driver allocated memory was 2317352960 and is now 3307208704.
2025-12-04T11:01:00.5096926Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.5098077Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T11:01:00.5099925Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]     PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda
2025-12-04T11:01:00.5101541Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.5102736Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T11:01:00.5104105Z [rank1]:E1204 11:00:42.001000 261554 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T11:01:00.5104979Z dist init r=1, world=4
2025-12-04T11:01:00.5105646Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T11:01:00.5106762Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T11:01:00.5108360Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T11:01:00.5109956Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T11:01:00.5111588Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T11:01:00.5113073Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T11:01:00.5114528Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.5116064Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.5117601Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.5119145Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T11:01:00.5120726Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T11:01:00.5122221Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T11:01:00.5123961Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T11:01:00.5125540Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T11:01:00.5127656Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 0. CUDA driver allocated memory was 2459959296 and is now 3449815040.
2025-12-04T11:01:00.5129636Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.5130850Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T11:01:00.5132729Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]     PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda
2025-12-04T11:01:00.5134304Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T11:01:00.5135518Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T11:01:00.5136985Z [rank0]:E1204 11:00:42.009000 261553 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T11:01:00.5137786Z dist init r=0, world=4
2025-12-04T11:01:00.5139135Z [rank0]:[W1204 11:00:42.127224715 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T11:01:00.5140501Z FAILED [10.4237s] [100%]
2025-12-04T11:01:00.5140789Z
2025-12-04T11:01:00.5140984Z =================================== FAILURES ===================================
2025-12-04T11:01:00.5141611Z _______________ TestUnevenParamShardCUDA.test_one_iteration_cuda _______________
2025-12-04T11:01:00.5142190Z Traceback (most recent call last):
2025-12-04T11:01:00.5143013Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T11:01:00.5143837Z     self._join_processes(fn)
2025-12-04T11:01:00.5144664Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T11:01:00.5145547Z     self._check_return_codes(fn, elapsed_time)
2025-12-04T11:01:00.5146437Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T11:01:00.5147301Z     raise RuntimeError(error)
2025-12-04T11:01:00.5147807Z RuntimeError: Process 2 exited with error code 10 and exception:
2025-12-04T11:01:00.5148340Z Traceback (most recent call last):
2025-12-04T11:01:00.5149139Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T11:01:00.5149946Z     getattr(self, test_name)()
2025-12-04T11:01:00.5150768Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T11:01:00.5151543Z     fn()
2025-12-04T11:01:00.5152214Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.5152982Z     method(*args, **kwargs)
2025-12-04T11:01:00.5153807Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.5154578Z     method(*args, **kwargs)
2025-12-04T11:01:00.5155315Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T11:01:00.5156073Z     with policy():
2025-12-04T11:01:00.5156775Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T11:01:00.5157547Z     raise RuntimeError(msg)
2025-12-04T11:01:00.5158867Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 2. CUDA driver allocated memory was 2300575744 and is now 3290431488.
2025-12-04T11:01:00.5160070Z
2025-12-04T11:01:00.5160324Z To execute this test, run the following from the base repo dir:
2025-12-04T11:01:00.5161433Z     PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda
2025-12-04T11:01:00.5162215Z
2025-12-04T11:01:00.5162517Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T11:01:00.5162930Z
2025-12-04T11:01:00.5163135Z Process 3 exited with error code 10 and exception:
2025-12-04T11:01:00.5163602Z Traceback (most recent call last):
2025-12-04T11:01:00.5164413Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T11:01:00.5165308Z     getattr(self, test_name)()
2025-12-04T11:01:00.5166083Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T11:01:00.5166858Z     fn()
2025-12-04T11:01:00.5167530Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.5168306Z     method(*args, **kwargs)
2025-12-04T11:01:00.5169037Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T11:01:00.5169801Z     method(*args, **kwargs)
2025-12-04T11:01:00.5170525Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T11:01:00.5171362Z     with policy():
2025-12-04T11:01:00.5172057Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T11:01:00.5172840Z     raise RuntimeError(msg)
2025-12-04T11:01:00.5174081Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 3. CUDA driver allocated memory was 2250244096 and is now 3240099840.
2025-12-04T11:01:00.5174854Z
2025-12-04T11:01:00.5175019Z To execute this test, run the following from the base repo dir:
2025-12-04T11:01:00.5175664Z     PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda
2025-12-04T11:01:00.5176150Z
2025-12-04T11:01:00.5176393Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T11:01:00.5176651Z
2025-12-04T11:01:00.5176659Z
2025-12-04T11:01:00.5176828Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T11:01:00.5177250Z Process 2 terminated with exit code 10, terminating remaining processes.
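As in the first attempt, the parent process turned the workers' exit codes into a single pytest failure: _join_processes waits for all four ranks and _check_return_codes raises once any of them exits nonzero, with exit code 10 being the leak-check failure. A rough, self-contained sketch of that join-and-check pattern, assuming torch.multiprocessing spawn workers; run_multiprocess_test and MEM_LEAK_EXIT_CODE are illustrative names, not the harness's own:

    import torch.multiprocessing as mp

    MEM_LEAK_EXIT_CODE = 10  # assumption: mirrors the "exit code: 10" above

    def _worker(rank: int, world_size: int) -> None:
        # Each rank would init its process group and run the test body,
        # exiting with MEM_LEAK_EXIT_CODE if its leak check trips.
        pass

    def run_multiprocess_test(world_size: int = 4) -> None:
        ctx = mp.get_context("spawn")
        procs = [ctx.Process(target=_worker, args=(r, world_size))
                 for r in range(world_size)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        for rank, p in enumerate(procs):
            if p.exitcode != 0:
                # The real harness attaches the child's captured traceback here.
                raise RuntimeError(
                    f"Process {rank} exited with error code {p.exitcode}")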
2025-12-04T11:01:00.5178009Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_uneven/distributed.fsdp.test_fsdp_uneven-1e76176a051c9c3e.xml - 2025-12-04T11:01:00.5178708Z =========================== short test summary info ============================ 2025-12-04T11:01:00.5179432Z FAILED [10.4237s] distributed/fsdp/test_fsdp_uneven.py::TestUnevenParamShardCUDA::test_one_iteration_cuda - RuntimeError: Process 2 exited with error code 10 and exception: 2025-12-04T11:01:00.5180066Z Traceback (most recent call last): 2025-12-04T11:01:00.5180577Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T11:01:00.5181129Z getattr(self, test_name)() 2025-12-04T11:01:00.5181613Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T11:01:00.5182097Z fn() 2025-12-04T11:01:00.5182516Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5182996Z method(*args, **kwargs) 2025-12-04T11:01:00.5183451Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5183927Z method(*args, **kwargs) 2025-12-04T11:01:00.5184384Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T11:01:00.5184854Z with policy(): 2025-12-04T11:01:00.5185287Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T11:01:00.5185764Z raise RuntimeError(msg) 2025-12-04T11:01:00.5186566Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 2. CUDA driver allocated memory was 2300575744 and is now 3290431488. 
2025-12-04T11:01:00.5187368Z 2025-12-04T11:01:00.5187524Z To execute this test, run the following from the base repo dir: 2025-12-04T11:01:00.5188178Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda 2025-12-04T11:01:00.5188670Z 2025-12-04T11:01:00.5188852Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T11:01:00.5189111Z 2025-12-04T11:01:00.5189232Z Process 3 exited with error code 10 and exception: 2025-12-04T11:01:00.5189520Z Traceback (most recent call last): 2025-12-04T11:01:00.5190019Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T11:01:00.5190522Z getattr(self, test_name)() 2025-12-04T11:01:00.5191052Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T11:01:00.5191536Z fn() 2025-12-04T11:01:00.5191955Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5192435Z method(*args, **kwargs) 2025-12-04T11:01:00.5192897Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5193376Z method(*args, **kwargs) 2025-12-04T11:01:00.5193827Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T11:01:00.5194291Z with policy(): 2025-12-04T11:01:00.5194729Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T11:01:00.5195214Z raise RuntimeError(msg) 2025-12-04T11:01:00.5196011Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 3. CUDA driver allocated memory was 2250244096 and is now 3240099840. 2025-12-04T11:01:00.5196740Z 2025-12-04T11:01:00.5196897Z To execute this test, run the following from the base repo dir: 2025-12-04T11:01:00.5197594Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda 2025-12-04T11:01:00.5198077Z 2025-12-04T11:01:00.5198262Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T11:01:00.5198648Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 2025-12-04T11:01:00.5198974Z ============================== 1 failed in 10.43s ============================== 2025-12-04T11:01:00.5199246Z Got exit code 1 2025-12-04T11:01:00.5199447Z Retrying single test... 
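After the first pytest session exits non-zero, the harness reruns just the failing test in a fresh session, and only marks it "FAILED CONSISTENTLY" (see below) when the isolated rerun fails too. A rough sketch of that policy, approximating the behavior visible in this log rather than the runner's actual code:

import subprocess

TEST_ID = ("test/distributed/fsdp/test_fsdp_uneven.py::"
           "TestUnevenParamShardCUDA::test_one_iteration_cuda")

def run_once():
    # -x stops after the first failure, matching the banner above.
    return subprocess.run(["python", "-m", "pytest", "-x", TEST_ID]).returncode

if run_once() != 0:
    # Retry the single failing test in isolation before giving up on it.
    if run_once() != 0:
        print(f"FAILED CONSISTENTLY: {TEST_ID}")
        # continue-through-error: move on to the remaining test files.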
2025-12-04T11:01:00.5199980Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_uneven/distributed.fsdp.test_fsdp_uneven-5aa88c83752998a4.xml 2025-12-04T11:01:00.5200573Z ============================= test session starts ============================== 2025-12-04T11:01:00.5201049Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T11:01:00.5201444Z cachedir: .pytest_cache 2025-12-04T11:01:00.5201897Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T11:01:00.5202392Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T11:01:00.5202634Z configfile: pytest.ini 2025-12-04T11:01:00.5203097Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T11:01:00.5203598Z collecting ... collected 1 item 2025-12-04T11:01:00.5204223Z stepcurrent: skipping 0 already run items. Running only test/distributed/fsdp/test_fsdp_uneven.py::TestUnevenParamShardCUDA::test_one_iteration_cuda 2025-12-04T11:01:00.5204783Z Running 1 items in this shard 2025-12-04T11:01:00.5204933Z 2025-12-04T11:01:00.5205530Z distributed/fsdp/test_fsdp_uneven.py::TestUnevenParamShardCUDA::test_one_iteration_cuda I1204 11:00:46.575000 261886 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 261955 2025-12-04T11:01:00.5206572Z I1204 11:00:46.576000 261886 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 261956 2025-12-04T11:01:00.5207281Z I1204 11:00:46.576000 261886 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 261957 2025-12-04T11:01:00.5207982Z I1204 11:00:46.577000 261886 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 261958 2025-12-04T11:01:00.5208662Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T11:01:00.5209365Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T11:01:00.5210382Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T11:01:00.5211474Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T11:01:00.5212467Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T11:01:00.5213393Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T11:01:00.5214310Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5215279Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T11:01:00.5216309Z [rank1]:E1204 11:00:55.157000 
261956 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5217290Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T11:01:00.5218255Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T11:01:00.5219196Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T11:01:00.5220141Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T11:01:00.5221139Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T11:01:00.5222459Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 1. CUDA driver allocated memory was 2317352960 and is now 3307208704. 2025-12-04T11:01:00.5223769Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T11:01:00.5224495Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T11:01:00.5225672Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda 2025-12-04T11:01:00.5226660Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T11:01:00.5227418Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T11:01:00.5228283Z [rank1]:E1204 11:00:55.157000 261956 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T11:01:00.5228786Z dist init r=1, world=4 2025-12-04T11:01:00.5229207Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T11:01:00.5229913Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T11:01:00.5230973Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T11:01:00.5231973Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T11:01:00.5232969Z [rank2]:E1204 11:00:55.181000 261957 
site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T11:01:00.5233947Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T11:01:00.5234922Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5235891Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T11:01:00.5236863Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5237833Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T11:01:00.5238801Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T11:01:00.5239750Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T11:01:00.5240742Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T11:01:00.5241714Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T11:01:00.5243042Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 2. CUDA driver allocated memory was 2300575744 and is now 3290431488. 
2025-12-04T11:01:00.5244280Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T11:01:00.5245003Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T11:01:00.5246173Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda 2025-12-04T11:01:00.5247154Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T11:01:00.5247909Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T11:01:00.5248766Z [rank2]:E1204 11:00:55.181000 261957 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T11:01:00.5249272Z dist init r=2, world=4 2025-12-04T11:01:00.5249700Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T11:01:00.5250401Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T11:01:00.5251474Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T11:01:00.5252477Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T11:01:00.5253428Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T11:01:00.5254423Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T11:01:00.5255349Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5256325Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T11:01:00.5257307Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5258279Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T11:01:00.5259261Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T11:01:00.5260214Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T11:01:00.5261240Z 
[rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T11:01:00.5262270Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T11:01:00.5263550Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 3. CUDA driver allocated memory was 2250244096 and is now 3240099840. 2025-12-04T11:01:00.5264804Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T11:01:00.5265536Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T11:01:00.5266724Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda 2025-12-04T11:01:00.5267726Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T11:01:00.5268496Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T11:01:00.5269365Z [rank3]:E1204 11:00:55.193000 261958 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T11:01:00.5269875Z dist init r=3, world=4 2025-12-04T11:01:00.5270299Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T11:01:00.5271042Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T11:01:00.5272064Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T11:01:00.5273069Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T11:01:00.5274128Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T11:01:00.5275072Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T11:01:00.5275995Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5276971Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T11:01:00.5277955Z [rank0]:E1204 11:00:55.206000 261955 
site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5278927Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T11:01:00.5279898Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T11:01:00.5280889Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T11:01:00.5281907Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T11:01:00.5282878Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T11:01:00.5284205Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 0. CUDA driver allocated memory was 2459959296 and is now 3449815040. 2025-12-04T11:01:00.5285458Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T11:01:00.5286200Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T11:01:00.5287396Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda 2025-12-04T11:01:00.5288400Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T11:01:00.5289169Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T11:01:00.5290042Z [rank0]:E1204 11:00:55.206000 261955 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T11:01:00.5290557Z dist init r=0, world=4 2025-12-04T11:01:00.5291461Z [rank0]:[W1204 11:00:55.471774209 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T11:01:00.5292340Z FAILED [10.7227s] [100%] 2025-12-04T11:01:00.5292483Z 2025-12-04T11:01:00.5292664Z =================================== FAILURES =================================== 2025-12-04T11:01:00.5293068Z _______________ TestUnevenParamShardCUDA.test_one_iteration_cuda _______________ 2025-12-04T11:01:00.5293388Z Traceback (most recent call last): 2025-12-04T11:01:00.5293902Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T11:01:00.5294416Z self._join_processes(fn) 2025-12-04T11:01:00.5294930Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T11:01:00.5295490Z self._check_return_codes(fn, elapsed_time) 2025-12-04T11:01:00.5296046Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T11:01:00.5296586Z raise RuntimeError(error) 2025-12-04T11:01:00.5296904Z RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T11:01:00.5297245Z Traceback (most recent call last): 2025-12-04T11:01:00.5297749Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T11:01:00.5298255Z getattr(self, test_name)() 2025-12-04T11:01:00.5298855Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T11:01:00.5299341Z fn() 2025-12-04T11:01:00.5299763Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5300313Z method(*args, **kwargs) 2025-12-04T11:01:00.5300826Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5301313Z method(*args, **kwargs) 2025-12-04T11:01:00.5301773Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T11:01:00.5302253Z with policy(): 2025-12-04T11:01:00.5302693Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T11:01:00.5303179Z raise RuntimeError(msg) 2025-12-04T11:01:00.5303991Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 1. CUDA driver allocated memory was 2317352960 and is now 3307208704. 
2025-12-04T11:01:00.5304746Z 2025-12-04T11:01:00.5304904Z To execute this test, run the following from the base repo dir: 2025-12-04T11:01:00.5305563Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda 2025-12-04T11:01:00.5306075Z 2025-12-04T11:01:00.5306261Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T11:01:00.5306529Z 2025-12-04T11:01:00.5306657Z Process 3 exited with error code 10 and exception: 2025-12-04T11:01:00.5306954Z Traceback (most recent call last): 2025-12-04T11:01:00.5307466Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T11:01:00.5307979Z getattr(self, test_name)() 2025-12-04T11:01:00.5308467Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T11:01:00.5308965Z fn() 2025-12-04T11:01:00.5309393Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5309881Z method(*args, **kwargs) 2025-12-04T11:01:00.5310344Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5310880Z method(*args, **kwargs) 2025-12-04T11:01:00.5311405Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T11:01:00.5311887Z with policy(): 2025-12-04T11:01:00.5312335Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T11:01:00.5312827Z raise RuntimeError(msg) 2025-12-04T11:01:00.5313648Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 3. CUDA driver allocated memory was 2250244096 and is now 3240099840. 2025-12-04T11:01:00.5314408Z 2025-12-04T11:01:00.5314566Z To execute this test, run the following from the base repo dir: 2025-12-04T11:01:00.5315226Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda 2025-12-04T11:01:00.5315729Z 2025-12-04T11:01:00.5315920Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T11:01:00.5316190Z 2025-12-04T11:01:00.5316193Z 2025-12-04T11:01:00.5316360Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T11:01:00.5316789Z Process 1 terminated with exit code 10, terminating remaining processes. 
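The tracebacks above all funnel through the same driver path: _join_processes waits for the per-rank worker processes, then _check_return_codes raises if any rank exited non-zero (exit code 10 is how a rank reports a caught test failure here). A self-contained approximation of that spawn/join/check pattern (hedged: MAGIC_EXIT and the worker body are illustrative stand-ins, not common_distributed.py's real code):

import multiprocessing as mp

MAGIC_EXIT = 10  # stand-in for the harness's "test failed on this rank" code

def _worker(rank, world_size):
    # A real rank would init its process group and run the test body here;
    # exiting with MAGIC_EXIT mimics a caught failure like the leak above.
    raise SystemExit(MAGIC_EXIT)

def run_multiprocess_test(world_size=4):
    ctx = mp.get_context("spawn")
    procs = [ctx.Process(target=_worker, args=(r, world_size))
             for r in range(world_size)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    for rank, p in enumerate(procs):
        if p.exitcode != 0:
            raise RuntimeError(
                f"Process {rank} exited with error code {p.exitcode}")

if __name__ == "__main__":
    run_multiprocess_test()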
2025-12-04T11:01:00.5317557Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_uneven/distributed.fsdp.test_fsdp_uneven-5aa88c83752998a4.xml - 2025-12-04T11:01:00.5318336Z =========================== short test summary info ============================ 2025-12-04T11:01:00.5319019Z FAILED [10.7227s] distributed/fsdp/test_fsdp_uneven.py::TestUnevenParamShardCUDA::test_one_iteration_cuda - RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T11:01:00.5319660Z Traceback (most recent call last): 2025-12-04T11:01:00.5320180Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T11:01:00.5320746Z getattr(self, test_name)() 2025-12-04T11:01:00.5321241Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T11:01:00.5321734Z fn() 2025-12-04T11:01:00.5322160Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5322649Z method(*args, **kwargs) 2025-12-04T11:01:00.5323123Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5323608Z method(*args, **kwargs) 2025-12-04T11:01:00.5324071Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T11:01:00.5324548Z with policy(): 2025-12-04T11:01:00.5324997Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T11:01:00.5325488Z raise RuntimeError(msg) 2025-12-04T11:01:00.5326304Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 1. CUDA driver allocated memory was 2317352960 and is now 3307208704. 
2025-12-04T11:01:00.5327054Z 2025-12-04T11:01:00.5327217Z To execute this test, run the following from the base repo dir: 2025-12-04T11:01:00.5327879Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda 2025-12-04T11:01:00.5328373Z 2025-12-04T11:01:00.5328562Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T11:01:00.5328820Z 2025-12-04T11:01:00.5328948Z Process 3 exited with error code 10 and exception: 2025-12-04T11:01:00.5329302Z Traceback (most recent call last): 2025-12-04T11:01:00.5329814Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T11:01:00.5330324Z getattr(self, test_name)() 2025-12-04T11:01:00.5330868Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T11:01:00.5331362Z fn() 2025-12-04T11:01:00.5331789Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5332282Z method(*args, **kwargs) 2025-12-04T11:01:00.5332747Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T11:01:00.5333235Z method(*args, **kwargs) 2025-12-04T11:01:00.5333698Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T11:01:00.5334179Z with policy(): 2025-12-04T11:01:00.5334626Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T11:01:00.5335119Z raise RuntimeError(msg) 2025-12-04T11:01:00.5335938Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestUnevenParamShardCUDA.test_one_iteration_cuda! Caching allocator allocated memory was 512 and is now reported as 1024 on device 3. CUDA driver allocated memory was 2250244096 and is now 3240099840. 2025-12-04T11:01:00.5336754Z 2025-12-04T11:01:00.5336918Z To execute this test, run the following from the base repo dir: 2025-12-04T11:01:00.5337576Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_uneven.py TestUnevenParamShardCUDA.test_one_iteration_cuda 2025-12-04T11:01:00.5338077Z 2025-12-04T11:01:00.5338262Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T11:01:00.5338667Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
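Separately from the leak itself, every rank also logged the ProcessGroupNCCL warning that destroy_process_group() was not called before program exit. In user code the documented remedy is an explicit teardown; a minimal sketch, assuming a launcher such as torchrun has already populated the env:// rendezvous variables (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE):

import torch.distributed as dist

dist.init_process_group(backend="nccl")
try:
    pass  # collectives / the training or test body would run here
finally:
    # Explicit teardown avoids the "destroy_process_group() was not called
    # before program exit" warning seen in the log above.
    dist.destroy_process_group()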
2025-12-04T11:01:00.5339008Z ============================== 1 failed in 10.74s ============================== 2025-12-04T11:01:00.5339286Z Got exit code 1 2025-12-04T11:01:00.5339728Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_uneven.py::TestUnevenParamShardCUDA::test_one_iteration_cuda 2025-12-04T11:01:00.5340391Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set 2025-12-04T11:01:00.5341220Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_uneven/distributed.fsdp.test_fsdp_uneven-b801ae92dd5ce662.xml 2025-12-04T11:01:00.5341836Z ============================= test session starts ============================== 2025-12-04T11:01:00.5342280Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T11:01:00.5342683Z cachedir: .pytest_cache 2025-12-04T11:01:00.5343158Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T11:01:00.5343664Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T11:01:00.5343915Z configfile: pytest.ini 2025-12-04T11:01:00.5344373Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T11:01:00.5344664Z collecting ... collected 1 item / 1 deselected / 0 selected 2025-12-04T11:01:00.5344840Z stepcurrent: skipping 1 already run items. 2025-12-04T11:01:00.5344991Z Running 0 items in this shard 2025-12-04T11:01:00.5345074Z 2025-12-04T11:01:00.5345335Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_uneven/distributed.fsdp.test_fsdp_uneven-b801ae92dd5ce662.xml - 2025-12-04T11:01:00.5345704Z ============================ 1 deselected in 0.00s ============================= 2025-12-04T11:01:00.5346045Z The following tests failed consistently: ['test/distributed/fsdp/test_fsdp_uneven.py::TestUnevenParamShardCUDA::test_one_iteration_cuda'] 2025-12-04T11:01:00.5346282Z 2025-12-04T11:01:00.5346488Z FINISHED PRINTING LOG FILE of distributed/fsdp/test_fsdp_uneven 1/1 (test/test-reports/distributed.fsdp.test_fsdp_uneven_1.1_a8a4caae48d3fe02_.log) 2025-12-04T11:01:00.5346737Z 2025-12-04T11:01:00.5346874Z Finished distributed/fsdp/test_fsdp_uneven 1/1 ... [2025-12-04 11:01:00.478101][4971089.328030941], took 0.70min 2025-12-04T11:01:00.5347338Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T11:01:00.5347761Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T11:01:00.5348002Z GITHUB_RUN_ID, GITHUB_RUN_ATTEMPT, or ARTIFACTS_FILE_SUFFIX not set, not uploading 2025-12-04T11:01:00.5348201Z Uploading artifacts took 0.00 seconds 2025-12-04T11:01:00.5348356Z distributed/fsdp/test_fsdp_uneven 1/1 failed! 2025-12-04T11:01:00.5348582Z Running distributed/tensor/test_op_strategy 1/1 ... [2025-12-04 11:01:00.484455][4971089.334388728] 2025-12-04T11:01:00.5348803Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:01:00.5349245Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/test_op_strategy.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 11:01:00.484934] 2025-12-04T11:02:03.1190582Z 2025-12-04T11:02:03.1192184Z distributed/tensor/test_op_strategy 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.test_op_strategy_1.1_a86fdac0f3c5dbf9_.log 2025-12-04T11:02:03.1205230Z Running 24 items in this shard: test/distributed/tensor/test_op_strategy.py::TestEinsumDims::test_batch_dims, test/distributed/tensor/test_op_strategy.py::TestEinsumDims::test_bmm_dims, test/distributed/tensor/test_op_strategy.py::TestEinsumDims::test_free_dims, test/distributed/tensor/test_op_strategy.py::TestEinsumDims::test_mm_dims, test/distributed/tensor/test_op_strategy.py::TestEinsumStrategies::test_bmm_1d_mesh, test/distributed/tensor/test_op_strategy.py::TestEinsumStrategies::test_bmm_2d_mesh, test/distributed/tensor/test_op_strategy.py::TestEinsumStrategies::test_bmm_diffinndim_2d_mesh, test/distributed/tensor/test_op_strategy.py::TestEinsumStrategies::test_bmm_diffoutndim_2d_mesh, test/distributed/tensor/test_op_strategy.py::TestEinsumStrategies::test_linearity_1d_mesh, test/distributed/tensor/test_op_strategy.py::TestEinsumStrategies::test_mm_1d_mesh, test/distributed/tensor/test_op_strategy.py::TestEinsumStrategies::test_mm_2d_mesh, test/distributed/tensor/test_op_strategy.py::TestEinsumStrategies::test_pointwise_1d_mesh, test/distributed/tensor/test_op_strategy.py::TestCostModel::test_bmm_strategies, test/distributed/tensor/test_op_strategy.py::TestCostModel::test_mm_strategies, test/distributed/tensor/test_op_strategy.py::TestCostModel::test_redistribute_cost_latency, test/distributed/tensor/test_op_strategy.py::TestCostModel::test_redistribute_cost_mesh_1d, test/distributed/tensor/test_op_strategy.py::TestCostModel::test_redistribute_cost_mesh_2d, test/distributed/tensor/test_op_strategy.py::DistTensorReplicateStrategyRegistrationTest::test_replicate_strategy_placement, test/distributed/tensor/test_op_strategy.py::DistTensorReplicateStrategyRegistrationTest::test_tuple_replicate_strategy_placement, test/distributed/tensor/test_op_strategy.py::TestStrategyHashing::test_call_with_different_nontensor_args, test/distributed/tensor/test_op_strategy.py::TestStrategyOperation::test_cache_clean, test/distributed/tensor/test_op_strategy.py::DistTensorReplicateStrategyRegistrationTestWithLocalTensor::test_replicate_strategy_placement, test/distributed/tensor/test_op_strategy.py::DistTensorReplicateStrategyRegistrationTestWithLocalTensor::test_tuple_replicate_strategy_placement, test/distributed/tensor/test_op_strategy.py::TestStrategyHashingWithLocalTensor::test_call_with_different_nontensor_args 2025-12-04T11:02:03.1216501Z 2025-12-04T11:02:03.1216953Z Finished distributed/tensor/test_op_strategy 1/1 ... [2025-12-04 11:02:03.118774][4971151.968704234], took 1.04min 2025-12-04T11:02:03.1218280Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T11:02:03.1245765Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T11:02:03.1253744Z Running distributed/fsdp/test_fsdp_grad_acc 1/1 ... 
[2025-12-04 11:02:03.125134][4971151.975066043] 2025-12-04T11:02:03.1254452Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:02:03.1258624Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/fsdp/test_fsdp_grad_acc.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:02:03.125630] 2025-12-04T11:03:16.1271209Z 2025-12-04T11:03:16.1272741Z distributed/fsdp/test_fsdp_grad_acc 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.fsdp.test_fsdp_grad_acc_1.1_4ebef78eff46df4f_.log 2025-12-04T11:03:16.1276506Z Running 6 items in this shard: test/distributed/fsdp/test_fsdp_grad_acc.py::TestGradAcc::test_grad_acc_configs0_use_orig_params_False, test/distributed/fsdp/test_fsdp_grad_acc.py::TestGradAcc::test_grad_acc_configs0_use_orig_params_True, test/distributed/fsdp/test_fsdp_grad_acc.py::TestGradAcc::test_grad_acc_configs1_use_orig_params_False, test/distributed/fsdp/test_fsdp_grad_acc.py::TestGradAcc::test_grad_acc_configs1_use_orig_params_True, test/distributed/fsdp/test_fsdp_grad_acc.py::TestGradAcc::test_grad_acc_cpu_offload_use_orig_params_False, test/distributed/fsdp/test_fsdp_grad_acc.py::TestGradAcc::test_grad_acc_cpu_offload_use_orig_params_True 2025-12-04T11:03:16.1280410Z 2025-12-04T11:03:16.1280990Z Finished distributed/fsdp/test_fsdp_grad_acc 1/1 ... [2025-12-04 11:03:16.126756][4971224.976686899], took 1.22min 2025-12-04T11:03:16.1299333Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T11:03:16.1327870Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T11:03:16.1334201Z Running distributed/checkpoint/test_state_dict_stager 1/1 ... [2025-12-04 11:03:16.133181][4971224.983114471] 2025-12-04T11:03:16.1334941Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:03:16.1338930Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/checkpoint/test_state_dict_stager.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 11:03:16.133680] 2025-12-04T11:03:50.3129470Z 2025-12-04T11:03:50.3131249Z distributed/checkpoint/test_state_dict_stager 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.checkpoint.test_state_dict_stager_1.1_122155c0a5dd88c3_.log 2025-12-04T11:03:50.3139971Z Running 14 items in this shard: test/distributed/checkpoint/test_state_dict_stager.py::TestStateDictStager::test_caching, test/distributed/checkpoint/test_state_dict_stager.py::TestStateDictStager::test_complex_storage_sharing, test/distributed/checkpoint/test_state_dict_stager.py::TestStateDictStager::test_cpu_storage_independence, test/distributed/checkpoint/test_state_dict_stager.py::TestStateDictStager::test_dataclasses, test/distributed/checkpoint/test_state_dict_stager.py::TestStateDictStager::test_different_dtypes, test/distributed/checkpoint/test_state_dict_stager.py::TestStateDictStager::test_empty_tensors, test/distributed/checkpoint/test_state_dict_stager.py::TestStateDictStager::test_tensor_attrs, test/distributed/checkpoint/test_state_dict_stager.py::TestStateDictStager::test_tensor_pinned_and_shared, test/distributed/checkpoint/test_state_dict_stager.py::TestStateDictStager::test_views, test/distributed/checkpoint/test_state_dict_stager.py::TestDTensorStateDictStager::test_dtensor, test/distributed/checkpoint/test_state_dict_stager.py::TestReplicationStager::test_replication_basic, test/distributed/checkpoint/test_state_dict_stager.py::TestReplicationStager::test_replication_dtensors, test/distributed/checkpoint/test_state_dict_stager.py::TestReplicationStager::test_replication_persistence, test/distributed/checkpoint/test_state_dict_stager.py::TestReplicationStager::test_replication_sharded_tensors 2025-12-04T11:03:50.3147114Z 2025-12-04T11:03:50.3147630Z Finished distributed/checkpoint/test_state_dict_stager 1/1 ... [2025-12-04 11:03:50.312697][4971259.162627715], took 0.57min 2025-12-04T11:03:50.3158530Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T11:03:50.3185605Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T11:03:50.3192483Z Running distributed/fsdp/test_fsdp_freezing_weights 1/1 ... [2025-12-04 11:03:50.318964][4971259.168897345] 2025-12-04T11:03:50.3193237Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:03:50.3197001Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/fsdp/test_fsdp_freezing_weights.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 11:03:50.319471] 2025-12-04T11:09:33.0913893Z 2025-12-04T11:09:33.0915433Z distributed/fsdp/test_fsdp_freezing_weights 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.fsdp.test_fsdp_freezing_weights_1.1_6e61f62f0e96a28f_.log 2025-12-04T11:09:33.0951306Z Running 32 items in this shard: test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_False_disable_autograd_False_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_False_disable_autograd_False_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_False_disable_autograd_True_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_False_disable_autograd_True_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_True_disable_autograd_False_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_True_disable_autograd_False_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_True_disable_autograd_True_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_True_disable_autograd_True_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_False_disable_autograd_False_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_False_disable_autograd_False_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_False_disable_autograd_True_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_False_disable_autograd_True_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_True_disable_autograd_False_forward_prefetch_False, 
test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_True_disable_autograd_False_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_True_disable_autograd_True_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_False_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_True_disable_autograd_True_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_False_disable_autograd_False_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_False_disable_autograd_False_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_False_disable_autograd_True_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_False_disable_autograd_True_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_True_disable_autograd_False_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_True_disable_autograd_False_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_True_disable_autograd_True_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_GradToNone_freeze_after_wrap_fsdp_True_disable_autograd_True_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_False_disable_autograd_False_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_False_disable_autograd_False_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_False_disable_autograd_True_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_False_disable_autograd_True_forward_prefetch_True, 
test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_True_disable_autograd_False_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_True_disable_autograd_False_forward_prefetch_True, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_True_disable_autograd_True_forward_prefetch_False, test/distributed/fsdp/test_fsdp_freezing_weights.py::TestFreezingWeights::test_freezing_weights_with_nested_trunk_True_freezing_method_FreezingMethod_RequiresGrad_freeze_after_wrap_fsdp_True_disable_autograd_True_forward_prefetch_True 2025-12-04T11:09:33.0985293Z 2025-12-04T11:09:33.0985796Z Finished distributed/fsdp/test_fsdp_freezing_weights 1/1 ... [2025-12-04 11:09:33.091242][4971601.941173931], took 5.71min 2025-12-04T11:09:33.0987426Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T11:09:33.0988725Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T11:09:33.0989504Z Running distributed/_pycute/test_typing 1/1 ... [2025-12-04 11:09:33.097312][4971601.947246131] 2025-12-04T11:09:33.0990155Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:09:33.0991595Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_pycute/test_typing.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:09:33.097770] 2025-12-04T11:09:35.2167639Z 2025-12-04T11:09:35.2168842Z distributed/_pycute/test_typing 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._pycute.test_typing_1.1_8d1617c60c6184e8_.log 2025-12-04T11:09:35.2170269Z Running 1 items in this shard: test/distributed/_pycute/test_typing.py::TestTyping::test_typing 2025-12-04T11:09:35.2170986Z 2025-12-04T11:09:35.2171420Z Finished distributed/_pycute/test_typing 1/1 ... [2025-12-04 11:09:35.216399][4971604.066331852], took 0.04min 2025-12-04T11:09:35.2195145Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T11:09:35.2220344Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T11:09:35.2227584Z Running distributed/test_distributed_spawn 1/7 ... [2025-12-04 11:09:35.222496][4971604.072429362] 2025-12-04T11:09:35.2228375Z MPI not available -- MPI backend tests will be skipped 2025-12-04T11:09:35.2229808Z Running distributed tests for the test backend with env init_method 2025-12-04T11:09:35.2232594Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:09:35.2236965Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=1', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 11:09:35.223492] 2025-12-04T11:09:37.1010929Z 2025-12-04T11:09:37.1012469Z distributed/test_distributed_spawn 1/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_1.7_99599a9965c9a3ab_.log 2025-12-04T11:09:37.1013589Z Running 0 items in this shard: 2025-12-04T11:09:37.1013855Z 2025-12-04T11:09:37.1021552Z Running distributed tests for the test backend with file init_method 2025-12-04T11:09:37.1022984Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:09:37.1027136Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=1', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:09:37.102528] 2025-12-04T11:09:38.9959979Z 2025-12-04T11:09:38.9961326Z distributed/test_distributed_spawn 1/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_1.7_77cb74a5766c7d13_.log 2025-12-04T11:09:38.9962512Z Running 0 items in this shard: 2025-12-04T11:09:38.9962772Z 2025-12-04T11:09:38.9969766Z Running distributed tests for the nccl backend with env init_method 2025-12-04T11:09:38.9970733Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:09:38.9976407Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=1', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:09:38.997341] 2025-12-04T11:14:23.9009570Z 2025-12-04T11:14:23.9011297Z distributed/test_distributed_spawn 1/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_1.7_d5e8ec9da134a360_.log 2025-12-04T11:14:23.9031829Z Running 38 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_with_then_hook, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_cuda_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_v_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_min, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_sum_cuda_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_ring_exchange_nccl, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_coalescing_manager, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward_grad_as_bucket_view_false, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_buffer_hook_allreduce, 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device_mesh_initialization, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_allreduce, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_powerSGD, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_join_model_equivalence, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_err_ignore_params, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_grad_as_bucket_view_no_set_grad_none, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_no_grad_as_bucket_view_set_grad_to_none, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_sync_bn_training_vs_eval, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_input_exception, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_inputs_stop_iteration_sync_bn, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_unused_params_rebuild_buckets_exception, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_detect_ddp_is_actually_static, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_different_graph_across_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_failure_order, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_wait_all_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_torch_profiler, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_static_graph_api_cpu, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sync_bn_logged, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_verify_model_across_rank_with_logger 2025-12-04T11:14:23.9051388Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel 2025-12-04T11:14:23.9052763Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_with_then_hook 2025-12-04T11:14:23.9054096Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_cuda_complex 2025-12-04T11:14:23.9055272Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_v_cuda 2025-12-04T11:14:23.9056450Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_min 2025-12-04T11:14:23.9057675Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_sum_cuda_complex 2025-12-04T11:14:23.9058845Z Running 1 items in this shard: 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_cuda 2025-12-04T11:14:23.9060010Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group_cuda 2025-12-04T11:14:23.9061244Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_group_cuda 2025-12-04T11:14:23.9062390Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group 2025-12-04T11:14:23.9063597Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_ring_exchange_nccl 2025-12-04T11:14:23.9064858Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_coalescing_manager 2025-12-04T11:14:23.9066035Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward 2025-12-04T11:14:23.9067367Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward_grad_as_bucket_view_false 2025-12-04T11:14:23.9068812Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_buffer_hook_allreduce 2025-12-04T11:14:23.9070056Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device_mesh_initialization 2025-12-04T11:14:23.9071325Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_allreduce 2025-12-04T11:14:23.9072519Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_powerSGD 2025-12-04T11:14:23.9073711Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_join_model_equivalence 2025-12-04T11:14:23.9075027Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_err_ignore_params 2025-12-04T11:14:23.9076518Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_grad_as_bucket_view_no_set_grad_none 2025-12-04T11:14:23.9078068Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_no_grad_as_bucket_view_set_grad_to_none 2025-12-04T11:14:23.9079458Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_sync_bn_training_vs_eval 2025-12-04T11:14:23.9080804Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_input_exception 2025-12-04T11:14:23.9082072Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_inputs_stop_iteration_sync_bn 2025-12-04T11:14:23.9083441Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_unused_params_rebuild_buckets_exception 2025-12-04T11:14:23.9084737Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_detect_ddp_is_actually_static 2025-12-04T11:14:23.9085958Z Running 1 items in this shard: 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_different_graph_across_ranks 2025-12-04T11:14:23.9087192Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_failure_order 2025-12-04T11:14:23.9088453Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_wait_all_ranks 2025-12-04T11:14:23.9089763Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_grad_is_view 2025-12-04T11:14:23.9091041Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_group_sum 2025-12-04T11:14:23.9092133Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum 2025-12-04T11:14:23.9093199Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_cuda 2025-12-04T11:14:23.9094327Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_torch_profiler 2025-12-04T11:14:23.9095497Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_static_graph_api_cpu 2025-12-04T11:14:23.9096638Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sync_bn_logged 2025-12-04T11:14:23.9097872Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_verify_model_across_rank_with_logger 2025-12-04T11:14:23.9098588Z 2025-12-04T11:14:23.9098970Z Running distributed tests for the nccl backend with file init_method 2025-12-04T11:14:23.9099525Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:14:23.9100970Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=1', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 11:14:23.903286] 2025-12-04T11:19:07.2284126Z 2025-12-04T11:19:07.2285420Z distributed/test_distributed_spawn 1/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_1.7_01b8294e195f5159_.log 2025-12-04T11:19:07.2306501Z Running 38 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_with_then_hook, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_cuda_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_v_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_min, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_sum_cuda_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_ring_exchange_nccl, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_coalescing_manager, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward_grad_as_bucket_view_false, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_buffer_hook_allreduce, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device_mesh_initialization, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_allreduce, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_powerSGD, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_join_model_equivalence, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_err_ignore_params, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_grad_as_bucket_view_no_set_grad_none, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_no_grad_as_bucket_view_set_grad_to_none, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_sync_bn_training_vs_eval, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_input_exception, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_inputs_stop_iteration_sync_bn, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_unused_params_rebuild_buckets_exception, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_detect_ddp_is_actually_static, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_different_graph_across_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_failure_order, 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_wait_all_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_torch_profiler, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_static_graph_api_cpu, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sync_bn_logged, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_verify_model_across_rank_with_logger 2025-12-04T11:19:07.2326179Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel 2025-12-04T11:19:07.2327532Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_with_then_hook 2025-12-04T11:19:07.2328854Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_cuda_complex 2025-12-04T11:19:07.2330008Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_v_cuda 2025-12-04T11:19:07.2331213Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_min 2025-12-04T11:19:07.2332514Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_sum_cuda_complex 2025-12-04T11:19:07.2333660Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_cuda 2025-12-04T11:19:07.2334831Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group_cuda 2025-12-04T11:19:07.2336015Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_group_cuda 2025-12-04T11:19:07.2337141Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group 2025-12-04T11:19:07.2338347Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_ring_exchange_nccl 2025-12-04T11:19:07.2339563Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_coalescing_manager 2025-12-04T11:19:07.2340775Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward 2025-12-04T11:19:07.2342093Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward_grad_as_bucket_view_false 2025-12-04T11:19:07.2343409Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_buffer_hook_allreduce 2025-12-04T11:19:07.2344640Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device_mesh_initialization 2025-12-04T11:19:07.2345845Z Running 1 items in 
this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_allreduce 2025-12-04T11:19:07.2347029Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_powerSGD 2025-12-04T11:19:07.2348225Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_join_model_equivalence 2025-12-04T11:19:07.2349539Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_err_ignore_params 2025-12-04T11:19:07.2351155Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_grad_as_bucket_view_no_set_grad_none 2025-12-04T11:19:07.2352702Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_no_grad_as_bucket_view_set_grad_to_none 2025-12-04T11:19:07.2354097Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_sync_bn_training_vs_eval 2025-12-04T11:19:07.2355303Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_input_exception 2025-12-04T11:19:07.2356565Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_inputs_stop_iteration_sync_bn 2025-12-04T11:19:07.2357915Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_unused_params_rebuild_buckets_exception 2025-12-04T11:19:07.2359221Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_detect_ddp_is_actually_static 2025-12-04T11:19:07.2360441Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_different_graph_across_ranks 2025-12-04T11:19:07.2361735Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_failure_order 2025-12-04T11:19:07.2362978Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_wait_all_ranks 2025-12-04T11:19:07.2364381Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_grad_is_view 2025-12-04T11:19:07.2365613Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_group_sum 2025-12-04T11:19:07.2366698Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum 2025-12-04T11:19:07.2367759Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_cuda 2025-12-04T11:19:07.2368874Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_torch_profiler 2025-12-04T11:19:07.2370030Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_static_graph_api_cpu 2025-12-04T11:19:07.2371203Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sync_bn_logged 2025-12-04T11:19:07.2372393Z Running 1 items in this shard: 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_verify_model_across_rank_with_logger 2025-12-04T11:19:07.2373108Z 2025-12-04T11:19:07.2373394Z Running distributed tests for the gloo backend with env init_method 2025-12-04T11:19:07.2373967Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:19:07.2375365Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=1', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:19:07.230690] 2025-12-04T11:22:33.3254233Z 2025-12-04T11:22:33.3255194Z distributed/test_distributed_spawn 1/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_1.7_64d77891495cc28b_.log 2025-12-04T11:22:33.3273947Z Running 38 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_with_then_hook, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_cuda_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_v_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_min, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_sum_cuda_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_ring_exchange_nccl, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_coalescing_manager, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward_grad_as_bucket_view_false, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_buffer_hook_allreduce, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device_mesh_initialization, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_allreduce, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_powerSGD, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_join_model_equivalence, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_err_ignore_params, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_grad_as_bucket_view_no_set_grad_none, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_no_grad_as_bucket_view_set_grad_to_none, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_sync_bn_training_vs_eval, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_input_exception, 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_inputs_stop_iteration_sync_bn, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_unused_params_rebuild_buckets_exception, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_detect_ddp_is_actually_static, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_different_graph_across_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_failure_order, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_wait_all_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_torch_profiler, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_static_graph_api_cpu, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sync_bn_logged, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_verify_model_across_rank_with_logger 2025-12-04T11:22:33.3293411Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel 2025-12-04T11:22:33.3294760Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_with_then_hook 2025-12-04T11:22:33.3296193Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_cuda_complex 2025-12-04T11:22:33.3297362Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_v_cuda 2025-12-04T11:22:33.3298527Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_min 2025-12-04T11:22:33.3299737Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_sum_cuda_complex 2025-12-04T11:22:33.3300954Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_cuda 2025-12-04T11:22:33.3302116Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group_cuda 2025-12-04T11:22:33.3303310Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_group_cuda 2025-12-04T11:22:33.3304451Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group 2025-12-04T11:22:33.3305660Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_ring_exchange_nccl 2025-12-04T11:22:33.3306885Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_coalescing_manager 2025-12-04T11:22:33.3308060Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward 
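[Annotation] The repeated "Running distributed tests for the <backend> backend with <init_method> init_method" records in this log come from replaying the same test_distributed_spawn command once per backend/init-method combination. A minimal sketch of such a driver loop follows, with the command line copied from the Executing [...] records above; the BACKEND and INIT_METHOD environment-variable names are an assumption for illustration, not confirmed by this log.

    import os
    import subprocess

    # Backend/init-method matrix seen in this log; MPI is skipped
    # ("MPI not available -- MPI backend tests will be skipped").
    BACKENDS = ["test", "nccl", "gloo"]
    INIT_METHODS = ["env", "file"]

    # Command copied from the Executing [...] records above.
    CMD = [
        "/opt/conda/envs/py_3.10/bin/python", "-bb",
        "distributed/test_distributed_spawn.py",
        "--shard-id=1", "--num-shards=7",
        "-v", "--subprocess", "-vv", "-rfEX",
        "-p", "no:xdist", "--use-pytest", "-x", "--reruns=0",
        "--import-slow-tests", "--import-disabled-tests",
    ]

    for backend in BACKENDS:
        for init_method in INIT_METHODS:
            print(f"Running distributed tests for the {backend} backend "
                  f"with {init_method} init_method")
            # BACKEND / INIT_METHOD variable names are illustrative only;
            # the real harness wires this choice up internally.
            env = dict(os.environ, BACKEND=backend, INIT_METHOD=init_method)
            subprocess.run(CMD, env=env, check=True)

[End annotation]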
2025-12-04T11:22:33.3309488Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward_grad_as_bucket_view_false 2025-12-04T11:22:33.3310875Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_buffer_hook_allreduce 2025-12-04T11:22:33.3312101Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device_mesh_initialization 2025-12-04T11:22:33.3313318Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_allreduce 2025-12-04T11:22:33.3314517Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_powerSGD 2025-12-04T11:22:33.3315720Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_join_model_equivalence 2025-12-04T11:22:33.3317057Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_err_ignore_params 2025-12-04T11:22:33.3318539Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_grad_as_bucket_view_no_set_grad_none 2025-12-04T11:22:33.3320098Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_no_grad_as_bucket_view_set_grad_to_none 2025-12-04T11:22:33.3321531Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_sync_bn_training_vs_eval 2025-12-04T11:22:33.3322744Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_input_exception 2025-12-04T11:22:33.3324027Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_inputs_stop_iteration_sync_bn 2025-12-04T11:22:33.3325388Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_unused_params_rebuild_buckets_exception 2025-12-04T11:22:33.3326709Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_detect_ddp_is_actually_static 2025-12-04T11:22:33.3328031Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_different_graph_across_ranks 2025-12-04T11:22:33.3329274Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_failure_order 2025-12-04T11:22:33.3330529Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_wait_all_ranks 2025-12-04T11:22:33.3331899Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_grad_is_view 2025-12-04T11:22:33.3333144Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_group_sum 2025-12-04T11:22:33.3334236Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum 2025-12-04T11:22:33.3335307Z Running 1 items in this shard: 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_cuda 2025-12-04T11:22:33.3336431Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_torch_profiler 2025-12-04T11:22:33.3337601Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_static_graph_api_cpu 2025-12-04T11:22:33.3338725Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sync_bn_logged 2025-12-04T11:22:33.3340004Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_verify_model_across_rank_with_logger 2025-12-04T11:22:33.3340766Z 2025-12-04T11:22:33.3341062Z Running distributed tests for the gloo backend with file init_method 2025-12-04T11:22:33.3341615Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:22:33.3343021Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=1', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:22:33.326556] 2025-12-04T11:26:00.9983431Z 2025-12-04T11:26:00.9984725Z distributed/test_distributed_spawn 1/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_1.7_229cf2c959aab996_.log 2025-12-04T11:26:01.0005241Z Running 38 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_with_then_hook, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_cuda_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_v_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_min, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_sum_cuda_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_ring_exchange_nccl, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_coalescing_manager, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward_grad_as_bucket_view_false, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_buffer_hook_allreduce, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device_mesh_initialization, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_allreduce, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_powerSGD, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_join_model_equivalence, 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_err_ignore_params, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_grad_as_bucket_view_no_set_grad_none, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_no_grad_as_bucket_view_set_grad_to_none, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_sync_bn_training_vs_eval, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_input_exception, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_inputs_stop_iteration_sync_bn, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_unused_params_rebuild_buckets_exception, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_detect_ddp_is_actually_static, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_different_graph_across_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_failure_order, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_wait_all_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_torch_profiler, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_static_graph_api_cpu, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sync_bn_logged, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_verify_model_across_rank_with_logger 2025-12-04T11:26:01.0026371Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel 2025-12-04T11:26:01.0027812Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_with_then_hook 2025-12-04T11:26:01.0029164Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_cuda_complex 2025-12-04T11:26:01.0030378Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_v_cuda 2025-12-04T11:26:01.0031606Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_min 2025-12-04T11:26:01.0032903Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_sum_cuda_complex 2025-12-04T11:26:01.0034054Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_cuda 2025-12-04T11:26:01.0035221Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group_cuda 2025-12-04T11:26:01.0036407Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_group_cuda 
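[Annotation] The --shard-id/--num-shards flags in every Executing [...] record split the collected tests across 7 workers, which is why the same shard arguments can report "Running 38 items in this shard" for one run and "Running 0 items in this shard" for another, as the collected set changes per backend and file. A minimal sketch of one way such a deterministic split could work, assuming stable-hash assignment; PyTorch's real scheduler also balances shards using recorded test times, which is not reproduced here.

    import hashlib

    def shard_items(items, shard_id, num_shards):
        """Keep only the test ids assigned to the given 1-indexed shard.

        Stable-hash assignment, so every worker computes the same
        partition independently. Illustrative only.
        """
        def bucket(test_id):
            digest = hashlib.sha1(test_id.encode()).hexdigest()
            return int(digest, 16) % num_shards + 1
        return [t for t in items if bucket(t) == shard_id]

    # Example: a 7-way split of a small collected set.
    collected = [f"TestDistBackendWithSpawn::test_case_{i}" for i in range(10)]
    print(shard_items(collected, shard_id=1, num_shards=7))

[End annotation]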
2025-12-04T11:26:01.0038317Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group 2025-12-04T11:26:01.0039530Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_ring_exchange_nccl 2025-12-04T11:26:01.0040793Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_coalescing_manager 2025-12-04T11:26:01.0041959Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward 2025-12-04T11:26:01.0043288Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward_grad_as_bucket_view_false 2025-12-04T11:26:01.0044596Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_buffer_hook_allreduce 2025-12-04T11:26:01.0045826Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device_mesh_initialization 2025-12-04T11:26:01.0047043Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_allreduce 2025-12-04T11:26:01.0048232Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_powerSGD 2025-12-04T11:26:01.0049421Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_join_model_equivalence 2025-12-04T11:26:01.0050984Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_err_ignore_params 2025-12-04T11:26:01.0052481Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_grad_as_bucket_view_no_set_grad_none 2025-12-04T11:26:01.0054052Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_native_mixed_precision_no_grad_as_bucket_view_set_grad_to_none 2025-12-04T11:26:01.0055461Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_sync_bn_training_vs_eval 2025-12-04T11:26:01.0056673Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_input_exception 2025-12-04T11:26:01.0057952Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_inputs_stop_iteration_sync_bn 2025-12-04T11:26:01.0059317Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_unused_params_rebuild_buckets_exception 2025-12-04T11:26:01.0060677Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_detect_ddp_is_actually_static 2025-12-04T11:26:01.0061903Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_different_graph_across_ranks 2025-12-04T11:26:01.0063139Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_failure_order 2025-12-04T11:26:01.0064398Z Running 1 items in this shard: 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_wait_all_ranks 2025-12-04T11:26:01.0065711Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_grad_is_view 2025-12-04T11:26:01.0066967Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_group_sum 2025-12-04T11:26:01.0068056Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum 2025-12-04T11:26:01.0069120Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_cuda 2025-12-04T11:26:01.0070338Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_torch_profiler 2025-12-04T11:26:01.0071562Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_static_graph_api_cpu 2025-12-04T11:26:01.0072674Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sync_bn_logged 2025-12-04T11:26:01.0073862Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_verify_model_across_rank_with_logger 2025-12-04T11:26:01.0074594Z 2025-12-04T11:26:01.0075026Z Finished distributed/test_distributed_spawn 1/7 ... [2025-12-04 11:26:00.999996][4972589.849926845], took 16.43min 2025-12-04T11:26:01.0076491Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T11:26:01.0077780Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T11:26:01.0078535Z GITHUB_RUN_ID, GITHUB_RUN_ATTEMPT, or ARTIFACTS_FILE_SUFFIX not set, not uploading 2025-12-04T11:26:01.0079118Z Uploading artifacts took 0.00 seconds 2025-12-04T11:26:01.0079766Z Running distributed/test_distributed_spawn 4/7 ... [2025-12-04 11:26:01.006651][4972589.856584499] 2025-12-04T11:26:01.0080481Z MPI not available -- MPI backend tests will be skipped 2025-12-04T11:26:01.0081248Z Running distributed tests for the test backend with env init_method 2025-12-04T11:26:01.0081791Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:26:01.0083188Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=4', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 11:26:01.007669] 2025-12-04T11:26:02.8836305Z 2025-12-04T11:26:02.8837594Z distributed/test_distributed_spawn 4/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_4.7_07d1469ac2174e1e_.log 2025-12-04T11:26:02.8838680Z Running 0 items in this shard: 2025-12-04T11:26:02.8838946Z 2025-12-04T11:26:02.8842247Z Running distributed tests for the test backend with file init_method 2025-12-04T11:26:02.8843692Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:26:02.8848493Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=4', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:26:02.884613] 2025-12-04T11:26:04.7661673Z 2025-12-04T11:26:04.7663500Z distributed/test_distributed_spawn 4/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_4.7_9e432f1e1bf2cbd8_.log 2025-12-04T11:26:04.7664622Z Running 0 items in this shard: 2025-12-04T11:26:04.7664893Z 2025-12-04T11:26:04.7665178Z Running distributed tests for the nccl backend with env init_method 2025-12-04T11:26:04.7665753Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:26:04.7667734Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=4', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:26:04.766503] 2025-12-04T11:29:50.9847227Z 2025-12-04T11:29:50.9847899Z distributed/test_distributed_spawn 4/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_4.7_5ef02bc0621c72c6_.log 2025-12-04T11:29:50.9855409Z Running 39 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm_2D_Input, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_with_amp_and_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_full_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_simple, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_with_empty, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_into_stack_tensor_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_min, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_product, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split, 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_timeout_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_gloo, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_compile_static_graph, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_logging_data_gpu, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_model_diff_shape_across_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_new_tensor_in_fwd_static_graph, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_destroy_full_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_gather, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_future, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_rank_size_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_isend_autograd_profiler, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang_wait_all_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_reduce, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_with_group_param, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_nccl_autograd_profiler 2025-12-04T11:29:50.9874400Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU 2025-12-04T11:29:50.9875836Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm_2D_Input 2025-12-04T11:29:50.9877289Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_with_amp_and_grad_is_view 2025-12-04T11:29:50.9878695Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_full_group 2025-12-04T11:29:50.9879938Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_simple 2025-12-04T11:29:50.9881248Z Running 1 items in this shard: 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_with_empty 2025-12-04T11:29:50.9882510Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_into_stack_tensor_cuda 2025-12-04T11:29:50.9883741Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_min 2025-12-04T11:29:50.9884893Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_sum 2025-12-04T11:29:50.9886026Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_product 2025-12-04T11:29:50.9887109Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all 2025-12-04T11:29:50.9888286Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_complex 2025-12-04T11:29:50.9889426Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group 2025-12-04T11:29:50.9890673Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split 2025-12-04T11:29:50.9891983Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_full_group_cuda 2025-12-04T11:29:50.9893336Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_group 2025-12-04T11:29:50.9894676Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_full_group_cuda 2025-12-04T11:29:50.9896055Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_group_cuda 2025-12-04T11:29:50.9897278Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_cuda 2025-12-04T11:29:50.9898411Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_full_group_cuda 2025-12-04T11:29:50.9899582Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_timeout_group 2025-12-04T11:29:50.9900811Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_gloo 2025-12-04T11:29:50.9901990Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_compile_static_graph 2025-12-04T11:29:50.9903119Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device 2025-12-04T11:29:50.9904226Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_logging_data_gpu 2025-12-04T11:29:50.9905437Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_model_diff_shape_across_ranks 2025-12-04T11:29:50.9906787Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_new_tensor_in_fwd_static_graph 2025-12-04T11:29:50.9907998Z Running 1 items in this 
shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_destroy_full_group 2025-12-04T11:29:50.9909077Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_gather 2025-12-04T11:29:50.9910111Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_future 2025-12-04T11:29:50.9911274Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_rank_size_group 2025-12-04T11:29:50.9912425Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_isend_autograd_profiler 2025-12-04T11:29:50.9913714Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang_wait_all_ranks 2025-12-04T11:29:50.9915015Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_reduce 2025-12-04T11:29:50.9916238Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_with_group_param 2025-12-04T11:29:50.9917418Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum_cuda 2025-12-04T11:29:50.9918560Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter 2025-12-04T11:29:50.9919618Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_complex 2025-12-04T11:29:50.9920843Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_nccl_autograd_profiler 2025-12-04T11:29:50.9921536Z 2025-12-04T11:29:50.9921827Z Running distributed tests for the nccl backend with file init_method 2025-12-04T11:29:50.9922376Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:29:50.9923787Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=4', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 11:29:50.987002] 2025-12-04T11:33:35.7404492Z 2025-12-04T11:33:35.7405953Z distributed/test_distributed_spawn 4/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_4.7_e14685ff38e73219_.log 2025-12-04T11:33:35.7426333Z Running 39 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm_2D_Input, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_with_amp_and_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_full_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_simple, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_with_empty, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_into_stack_tensor_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_min, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_product, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_timeout_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_gloo, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_compile_static_graph, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_logging_data_gpu, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_model_diff_shape_across_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_new_tensor_in_fwd_static_graph, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_destroy_full_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_gather, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_future, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_rank_size_group, 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_isend_autograd_profiler, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang_wait_all_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_reduce, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_with_group_param, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_nccl_autograd_profiler 2025-12-04T11:33:35.7446288Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU 2025-12-04T11:33:35.7447652Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm_2D_Input 2025-12-04T11:33:35.7449118Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_with_amp_and_grad_is_view 2025-12-04T11:33:35.7450509Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_full_group 2025-12-04T11:33:35.7451815Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_simple 2025-12-04T11:33:35.7453071Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_with_empty 2025-12-04T11:33:35.7454352Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_into_stack_tensor_cuda 2025-12-04T11:33:35.7455598Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_min 2025-12-04T11:33:35.7456919Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_sum 2025-12-04T11:33:35.7458075Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_product 2025-12-04T11:33:35.7459186Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all 2025-12-04T11:33:35.7460288Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_complex 2025-12-04T11:33:35.7461500Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group 2025-12-04T11:33:35.7462713Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split 2025-12-04T11:33:35.7464040Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_full_group_cuda 2025-12-04T11:33:35.7465389Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_group 2025-12-04T11:33:35.7466790Z Running 1 items in this shard: 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_full_group_cuda 2025-12-04T11:33:35.7468170Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_group_cuda 2025-12-04T11:33:35.7469498Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_cuda 2025-12-04T11:33:35.7470682Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_full_group_cuda 2025-12-04T11:33:35.7471878Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_timeout_group 2025-12-04T11:33:35.7473062Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_gloo 2025-12-04T11:33:35.7474265Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_compile_static_graph 2025-12-04T11:33:35.7475413Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device 2025-12-04T11:33:35.7476591Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_logging_data_gpu 2025-12-04T11:33:35.7477825Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_model_diff_shape_across_ranks 2025-12-04T11:33:35.7479125Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_new_tensor_in_fwd_static_graph 2025-12-04T11:33:35.7480357Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_destroy_full_group 2025-12-04T11:33:35.7481518Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_gather 2025-12-04T11:33:35.7482573Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_future 2025-12-04T11:33:35.7483684Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_rank_size_group 2025-12-04T11:33:35.7484862Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_isend_autograd_profiler 2025-12-04T11:33:35.7486179Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang_wait_all_ranks 2025-12-04T11:33:35.7487593Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_reduce 2025-12-04T11:33:35.7488833Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_with_group_param 2025-12-04T11:33:35.7490026Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum_cuda 2025-12-04T11:33:35.7491151Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter 2025-12-04T11:33:35.7492226Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_complex 2025-12-04T11:33:35.7493412Z Running 1 items in this shard: 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_nccl_autograd_profiler 2025-12-04T11:33:35.7494113Z 2025-12-04T11:33:35.7494412Z Running distributed tests for the gloo backend with env init_method 2025-12-04T11:33:35.7494971Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:33:35.7496301Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=4', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:33:35.742475] 2025-12-04T11:36:41.6461636Z 2025-12-04T11:36:41.6462697Z distributed/test_distributed_spawn 4/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_4.7_ef28c11c26e8b73d_.log 2025-12-04T11:36:41.6483067Z Running 39 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm_2D_Input, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_with_amp_and_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_full_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_simple, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_with_empty, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_into_stack_tensor_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_min, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_product, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_timeout_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_gloo, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_compile_static_graph, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_logging_data_gpu, 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_model_diff_shape_across_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_new_tensor_in_fwd_static_graph, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_destroy_full_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_gather, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_future, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_rank_size_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_isend_autograd_profiler, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang_wait_all_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_reduce, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_with_group_param, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_nccl_autograd_profiler 2025-12-04T11:36:41.6502834Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU 2025-12-04T11:36:41.6504184Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm_2D_Input 2025-12-04T11:36:41.6505644Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_with_amp_and_grad_is_view 2025-12-04T11:36:41.6507029Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_full_group 2025-12-04T11:36:41.6508286Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_simple 2025-12-04T11:36:41.6509537Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_with_empty 2025-12-04T11:36:41.6510891Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_into_stack_tensor_cuda 2025-12-04T11:36:41.6512140Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_min 2025-12-04T11:36:41.6513300Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_sum 2025-12-04T11:36:41.6514466Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_product 2025-12-04T11:36:41.6515597Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all 2025-12-04T11:36:41.6516701Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_complex 2025-12-04T11:36:41.6517860Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group 2025-12-04T11:36:41.6519087Z Running 1 items 
in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split 2025-12-04T11:36:41.6520411Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_full_group_cuda 2025-12-04T11:36:41.6521923Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_group 2025-12-04T11:36:41.6523277Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_full_group_cuda 2025-12-04T11:36:41.6524670Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_group_cuda 2025-12-04T11:36:41.6525905Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_cuda 2025-12-04T11:36:41.6527045Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_full_group_cuda 2025-12-04T11:36:41.6528221Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_timeout_group 2025-12-04T11:36:41.6529397Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_gloo 2025-12-04T11:36:41.6530641Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_compile_static_graph 2025-12-04T11:36:41.6531766Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device 2025-12-04T11:36:41.6532867Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_logging_data_gpu 2025-12-04T11:36:41.6534174Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_model_diff_shape_across_ranks 2025-12-04T11:36:41.6535460Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_new_tensor_in_fwd_static_graph 2025-12-04T11:36:41.6536674Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_destroy_full_group 2025-12-04T11:36:41.6537775Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_gather 2025-12-04T11:36:41.6538825Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_future 2025-12-04T11:36:41.6539938Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_rank_size_group 2025-12-04T11:36:41.6541173Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_isend_autograd_profiler 2025-12-04T11:36:41.6542470Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang_wait_all_ranks 2025-12-04T11:36:41.6543779Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_reduce 2025-12-04T11:36:41.6545016Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_with_group_param 2025-12-04T11:36:41.6546207Z 
Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum_cuda 2025-12-04T11:36:41.6547286Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter 2025-12-04T11:36:41.6548357Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_complex 2025-12-04T11:36:41.6549585Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_nccl_autograd_profiler 2025-12-04T11:36:41.6550298Z 2025-12-04T11:36:41.6550586Z Running distributed tests for the gloo backend with file init_method 2025-12-04T11:36:41.6551201Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:36:41.6552701Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=4', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:36:41.648550] 2025-12-04T11:39:50.8487833Z 2025-12-04T11:39:50.8489165Z distributed/test_distributed_spawn 4/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_4.7_2902461f29a63719_.log 2025-12-04T11:39:50.8512257Z Running 39 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm_2D_Input, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_with_amp_and_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_full_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_simple, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_with_empty, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_into_stack_tensor_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_min, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_product, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_full_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_full_group_cuda, 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_timeout_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_gloo, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_compile_static_graph, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_logging_data_gpu, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_model_diff_shape_across_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_new_tensor_in_fwd_static_graph, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_destroy_full_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_gather, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_future, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_rank_size_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_isend_autograd_profiler, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang_wait_all_ranks, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_reduce, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_with_group_param, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_complex, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_nccl_autograd_profiler 2025-12-04T11:39:50.8531864Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU 2025-12-04T11:39:50.8533205Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm_2D_Input 2025-12-04T11:39:50.8534639Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_with_amp_and_grad_is_view 2025-12-04T11:39:50.8536046Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_full_group 2025-12-04T11:39:50.8537288Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_simple 2025-12-04T11:39:50.8538557Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_with_empty 2025-12-04T11:39:50.8539808Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_into_stack_tensor_cuda 2025-12-04T11:39:50.8541209Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_min 2025-12-04T11:39:50.8542356Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_group_sum 2025-12-04T11:39:50.8543488Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_product 
2025-12-04T11:39:50.8544589Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all 2025-12-04T11:39:50.8545681Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_complex 2025-12-04T11:39:50.8546815Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_full_group 2025-12-04T11:39:50.8548008Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split 2025-12-04T11:39:50.8549325Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_full_group_cuda 2025-12-04T11:39:50.8550709Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_equal_split_group 2025-12-04T11:39:50.8552047Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_full_group_cuda 2025-12-04T11:39:50.8553416Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_to_all_single_unequal_split_group_cuda 2025-12-04T11:39:50.8554630Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_cuda 2025-12-04T11:39:50.8555748Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_full_group_cuda 2025-12-04T11:39:50.8556914Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_timeout_group 2025-12-04T11:39:50.8558075Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_gloo 2025-12-04T11:39:50.8559338Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_compile_static_graph 2025-12-04T11:39:50.8560462Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_device 2025-12-04T11:39:50.8561602Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_logging_data_gpu 2025-12-04T11:39:50.8562804Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_model_diff_shape_across_ranks 2025-12-04T11:39:50.8564075Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_new_tensor_in_fwd_static_graph 2025-12-04T11:39:50.8565272Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_destroy_full_group 2025-12-04T11:39:50.8566344Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_gather 2025-12-04T11:39:50.8567375Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_future 2025-12-04T11:39:50.8568462Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_rank_size_group 2025-12-04T11:39:50.8569605Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_isend_autograd_profiler 2025-12-04T11:39:50.8570927Z Running 1 
items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang_wait_all_ranks
2025-12-04T11:39:50.8572316Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_reduce
2025-12-04T11:39:50.8573525Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_with_group_param
2025-12-04T11:39:50.8574703Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum_cuda
2025-12-04T11:39:50.8575771Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter
2025-12-04T11:39:50.8576833Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_complex
2025-12-04T11:39:50.8578017Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_nccl_autograd_profiler
2025-12-04T11:39:50.8578722Z
2025-12-04T11:39:50.8579151Z Finished distributed/test_distributed_spawn 4/7 ... [2025-12-04 11:39:50.850509][4973419.700430413], took 13.83min
2025-12-04T11:39:50.8580655Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T11:39:50.8581951Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T11:39:50.8582736Z Running distributed/test_distributed_spawn 7/7 ... [2025-12-04 11:39:50.857648][4973419.707579863]
2025-12-04T11:39:50.8583454Z MPI not available -- MPI backend tests will be skipped
2025-12-04T11:39:50.8584066Z Running distributed tests for the test backend with env init_method
2025-12-04T11:39:50.8584619Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T11:39:50.8588374Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=7', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:39:50.858625]
2025-12-04T11:39:52.7697121Z
2025-12-04T11:39:52.7698715Z distributed/test_distributed_spawn 7/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_7.7_7feaa3d9828d89f8_.log
2025-12-04T11:39:52.7699831Z Running 0 items in this shard:
2025-12-04T11:39:52.7700846Z
2025-12-04T11:39:52.7706891Z Running distributed tests for the test backend with file init_method
2025-12-04T11:39:52.7707862Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T11:39:52.7713138Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=7', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:39:52.771055]
2025-12-04T11:39:54.7178947Z
2025-12-04T11:39:54.7180223Z distributed/test_distributed_spawn 7/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_7.7_e980a7469360307d_.log
2025-12-04T11:39:54.7181737Z Running 0 items in this shard:
2025-12-04T11:39:54.7181995Z
2025-12-04T11:39:54.7191092Z Running distributed tests for the nccl backend with env init_method
2025-12-04T11:39:54.7197146Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T11:39:54.7201609Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=7', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:39:54.719774]
2025-12-04T11:43:55.4293436Z
2025-12-04T11:43:55.4294959Z distributed/test_distributed_spawn 7/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_7.7_088fa0942a5c9fd5_.log
2025-12-04T11:43:55.4314620Z Running 34 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_hook, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_full_group_min, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_max, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_op_err, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_compute_bucket_assignment_by_size_sparse_error_with_logger, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_comm_hook_logging, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_post_localSGD, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_error, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_namedtuple, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_static_graph_nested_types, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_backend, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_data_parallel_params, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_grads_same_across_ranks_with_no_sync,
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_invalid_static_graph, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_gloo_rank_0_timeout, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_broadcast, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration_negative_input_rank, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_overlap_not_allowed, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_with_hierarchical_sgd, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_full_group_product, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_scatter_v_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sparse_all_reduce_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_undefined_grad_parity_unused_parameters 2025-12-04T11:43:55.4332432Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU_grad_is_view 2025-12-04T11:43:55.4333841Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm 2025-12-04T11:43:55.4335237Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync 2025-12-04T11:43:55.4336532Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_hook 2025-12-04T11:43:55.4337900Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_grad_is_view 2025-12-04T11:43:55.4339179Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_group 2025-12-04T11:43:55.4340338Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_group 2025-12-04T11:43:55.4341589Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_full_group_min 2025-12-04T11:43:55.4342875Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_max 2025-12-04T11:43:55.4344112Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_sum 2025-12-04T11:43:55.4345327Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_sum 2025-12-04T11:43:55.4346493Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group_cuda 2025-12-04T11:43:55.4347657Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_op_err 2025-12-04T11:43:55.4348990Z Running 1 items in this shard: 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_compute_bucket_assignment_by_size_sparse_error_with_logger 2025-12-04T11:43:55.4350322Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_comm_hook_logging 2025-12-04T11:43:55.4351553Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_post_localSGD 2025-12-04T11:43:55.4352831Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_error 2025-12-04T11:43:55.4354144Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_namedtuple 2025-12-04T11:43:55.4355320Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_static_graph_nested_types 2025-12-04T11:43:55.4356468Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_backend 2025-12-04T11:43:55.4357591Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_data_parallel_params 2025-12-04T11:43:55.4358844Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_grads_same_across_ranks_with_no_sync 2025-12-04T11:43:55.4360070Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_invalid_static_graph 2025-12-04T11:43:55.4361334Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang 2025-12-04T11:43:55.4362617Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_gloo_rank_0_timeout 2025-12-04T11:43:55.4363873Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_broadcast 2025-12-04T11:43:55.4365082Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration 2025-12-04T11:43:55.4366501Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration_negative_input_rank 2025-12-04T11:43:55.4367844Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_overlap_not_allowed 2025-12-04T11:43:55.4369213Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_with_hierarchical_sgd 2025-12-04T11:43:55.4370536Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_full_group_product 2025-12-04T11:43:55.4371748Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_scatter_v_cuda 2025-12-04T11:43:55.4372898Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sparse_all_reduce_sum 2025-12-04T11:43:55.4374135Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_undefined_grad_parity_unused_parameters 2025-12-04T11:43:55.4374857Z 2025-12-04T11:43:55.4375148Z Running distributed tests for the nccl backend with file 
init_method 2025-12-04T11:43:55.4375703Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:43:55.4377094Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=7', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:43:55.431374] 2025-12-04T11:47:54.8403162Z 2025-12-04T11:47:54.8404570Z distributed/test_distributed_spawn 7/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_7.7_1ad13ea334ac0730_.log 2025-12-04T11:47:54.8423231Z Running 34 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_hook, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_full_group_min, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_max, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_op_err, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_compute_bucket_assignment_by_size_sparse_error_with_logger, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_comm_hook_logging, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_post_localSGD, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_error, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_namedtuple, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_static_graph_nested_types, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_backend, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_data_parallel_params, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_grads_same_across_ranks_with_no_sync, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_invalid_static_graph, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_gloo_rank_0_timeout, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_broadcast, 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration_negative_input_rank, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_overlap_not_allowed, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_with_hierarchical_sgd, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_full_group_product, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_scatter_v_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sparse_all_reduce_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_undefined_grad_parity_unused_parameters 2025-12-04T11:47:54.8441174Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU_grad_is_view 2025-12-04T11:47:54.8442533Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm 2025-12-04T11:47:54.8443816Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync 2025-12-04T11:47:54.8445123Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_hook 2025-12-04T11:47:54.8446476Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_grad_is_view 2025-12-04T11:47:54.8447861Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_group 2025-12-04T11:47:54.8449033Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_group 2025-12-04T11:47:54.8450239Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_full_group_min 2025-12-04T11:47:54.8451566Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_max 2025-12-04T11:47:54.8452814Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_sum 2025-12-04T11:47:54.8454038Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_sum 2025-12-04T11:47:54.8455213Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group_cuda 2025-12-04T11:47:54.8456376Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_op_err 2025-12-04T11:47:54.8457719Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_compute_bucket_assignment_by_size_sparse_error_with_logger 2025-12-04T11:47:54.8459055Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_comm_hook_logging 2025-12-04T11:47:54.8460351Z Running 1 items in this shard: 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_post_localSGD 2025-12-04T11:47:54.8461734Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_error 2025-12-04T11:47:54.8462950Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_namedtuple 2025-12-04T11:47:54.8464107Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_static_graph_nested_types 2025-12-04T11:47:54.8465262Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_backend 2025-12-04T11:47:54.8466383Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_data_parallel_params 2025-12-04T11:47:54.8467629Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_grads_same_across_ranks_with_no_sync 2025-12-04T11:47:54.8468855Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_invalid_static_graph 2025-12-04T11:47:54.8470058Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang 2025-12-04T11:47:54.8471421Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_gloo_rank_0_timeout 2025-12-04T11:47:54.8472679Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_broadcast 2025-12-04T11:47:54.8473881Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration 2025-12-04T11:47:54.8475195Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration_negative_input_rank 2025-12-04T11:47:54.8476561Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_overlap_not_allowed 2025-12-04T11:47:54.8477917Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_with_hierarchical_sgd 2025-12-04T11:47:54.8479338Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_full_group_product 2025-12-04T11:47:54.8480513Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_scatter_v_cuda 2025-12-04T11:47:54.8481717Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sparse_all_reduce_sum 2025-12-04T11:47:54.8482986Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_undefined_grad_parity_unused_parameters 2025-12-04T11:47:54.8483717Z 2025-12-04T11:47:54.8484016Z Running distributed tests for the gloo backend with env init_method 2025-12-04T11:47:54.8484569Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:47:54.8485969Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=7', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', 
'--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:47:54.842337] 2025-12-04T11:51:13.5705572Z 2025-12-04T11:51:13.5706866Z distributed/test_distributed_spawn 7/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_7.7_888135fea07e3277_.log 2025-12-04T11:51:13.5725821Z Running 34 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_hook, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_full_group_min, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_max, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_op_err, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_compute_bucket_assignment_by_size_sparse_error_with_logger, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_comm_hook_logging, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_post_localSGD, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_error, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_namedtuple, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_static_graph_nested_types, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_backend, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_data_parallel_params, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_grads_same_across_ranks_with_no_sync, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_invalid_static_graph, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_gloo_rank_0_timeout, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_broadcast, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration_negative_input_rank, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_overlap_not_allowed, 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_with_hierarchical_sgd, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_full_group_product, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_scatter_v_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sparse_all_reduce_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_undefined_grad_parity_unused_parameters 2025-12-04T11:51:13.5743995Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU_grad_is_view 2025-12-04T11:51:13.5745371Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm 2025-12-04T11:51:13.5746671Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync 2025-12-04T11:51:13.5748097Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_hook 2025-12-04T11:51:13.5749458Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_grad_is_view 2025-12-04T11:51:13.5750784Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_group 2025-12-04T11:51:13.5751958Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_group 2025-12-04T11:51:13.5753161Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_full_group_min 2025-12-04T11:51:13.5754431Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_max 2025-12-04T11:51:13.5755677Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_sum 2025-12-04T11:51:13.5756902Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_sum 2025-12-04T11:51:13.5758073Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group_cuda 2025-12-04T11:51:13.5759234Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_op_err 2025-12-04T11:51:13.5760570Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_compute_bucket_assignment_by_size_sparse_error_with_logger 2025-12-04T11:51:13.5761970Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_comm_hook_logging 2025-12-04T11:51:13.5763163Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_post_localSGD 2025-12-04T11:51:13.5764447Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_error 2025-12-04T11:51:13.5765802Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_namedtuple 
2025-12-04T11:51:13.5767248Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_static_graph_nested_types 2025-12-04T11:51:13.5768689Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_backend 2025-12-04T11:51:13.5769823Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_data_parallel_params 2025-12-04T11:51:13.5771161Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_grads_same_across_ranks_with_no_sync 2025-12-04T11:51:13.5772394Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_invalid_static_graph 2025-12-04T11:51:13.5773607Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang 2025-12-04T11:51:13.5774916Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_gloo_rank_0_timeout 2025-12-04T11:51:13.5776186Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_broadcast 2025-12-04T11:51:13.5777408Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration 2025-12-04T11:51:13.5778725Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration_negative_input_rank 2025-12-04T11:51:13.5780178Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_overlap_not_allowed 2025-12-04T11:51:13.5781592Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_with_hierarchical_sgd 2025-12-04T11:51:13.5782919Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_full_group_product 2025-12-04T11:51:13.5784098Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_scatter_v_cuda 2025-12-04T11:51:13.5785253Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sparse_all_reduce_sum 2025-12-04T11:51:13.5786496Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_undefined_grad_parity_unused_parameters 2025-12-04T11:51:13.5787229Z 2025-12-04T11:51:13.5787522Z Running distributed tests for the gloo backend with file init_method 2025-12-04T11:51:13.5788084Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:51:13.5789500Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_distributed_spawn.py', '--shard-id=7', '--num-shards=7', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 11:51:13.572952] 2025-12-04T11:54:32.2059391Z 2025-12-04T11:54:32.2060904Z distributed/test_distributed_spawn 7/7 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_distributed_spawn_7.7_1cd88d93572f4e40_.log 2025-12-04T11:54:32.2079349Z Running 34 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_hook, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_grad_is_view, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_group, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_full_group_min, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_max, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_op_err, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_compute_bucket_assignment_by_size_sparse_error_with_logger, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_comm_hook_logging, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_post_localSGD, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_error, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_namedtuple, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_static_graph_nested_types, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_backend, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_data_parallel_params, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_grads_same_across_ranks_with_no_sync, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_invalid_static_graph, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_gloo_rank_0_timeout, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_broadcast, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration_negative_input_rank, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_overlap_not_allowed, 
test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_with_hierarchical_sgd, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_full_group_product, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_scatter_v_cuda, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sparse_all_reduce_sum, test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_undefined_grad_parity_unused_parameters 2025-12-04T11:54:32.2097390Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallelCPU_grad_is_view 2025-12-04T11:54:32.2098804Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_DistributedDataParallel_SyncBatchNorm 2025-12-04T11:54:32.2100094Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync 2025-12-04T11:54:32.2101431Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_allreduce_hook 2025-12-04T11:54:32.2102789Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_accumulate_gradients_no_sync_grad_is_view 2025-12-04T11:54:32.2104058Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_coalesced_group 2025-12-04T11:54:32.2105337Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_gather_group 2025-12-04T11:54:32.2106591Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_full_group_min 2025-12-04T11:54:32.2107867Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_max 2025-12-04T11:54:32.2109098Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_coalesced_group_sum 2025-12-04T11:54:32.2110309Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_all_reduce_full_group_sum 2025-12-04T11:54:32.2111570Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_barrier_group_cuda 2025-12-04T11:54:32.2127981Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_batch_isend_irecv_op_err 2025-12-04T11:54:32.2129342Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_compute_bucket_assignment_by_size_sparse_error_with_logger 2025-12-04T11:54:32.2130760Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_comm_hook_logging 2025-12-04T11:54:32.2131957Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_hook_parity_post_localSGD 2025-12-04T11:54:32.2133430Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_multiple_nested_unused_params_error 2025-12-04T11:54:32.2134658Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_namedtuple 
2025-12-04T11:54:32.2135834Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_static_graph_nested_types 2025-12-04T11:54:32.2136997Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_backend 2025-12-04T11:54:32.2138127Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_get_data_parallel_params 2025-12-04T11:54:32.2139387Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_grads_same_across_ranks_with_no_sync 2025-12-04T11:54:32.2140662Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_invalid_static_graph 2025-12-04T11:54:32.2141879Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_allreduce_hang 2025-12-04T11:54:32.2143167Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_gloo_rank_0_timeout 2025-12-04T11:54:32.2144434Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_nccl_backend_bool_broadcast 2025-12-04T11:54:32.2145649Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration 2025-12-04T11:54:32.2146966Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_by_enumeration_negative_input_rank 2025-12-04T11:54:32.2148317Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_overlap_not_allowed 2025-12-04T11:54:32.2149691Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_with_hierarchical_sgd 2025-12-04T11:54:32.2151071Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_full_group_product 2025-12-04T11:54:32.2152347Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_scatter_v_cuda 2025-12-04T11:54:32.2153508Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sparse_all_reduce_sum 2025-12-04T11:54:32.2154757Z Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_undefined_grad_parity_unused_parameters 2025-12-04T11:54:32.2155490Z 2025-12-04T11:54:32.2155931Z Finished distributed/test_distributed_spawn 7/7 ... [2025-12-04 11:54:32.207361][4974301.057291815], took 14.69min 2025-12-04T11:54:32.2157387Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T11:54:32.2158681Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T11:54:32.2159418Z GITHUB_RUN_ID, GITHUB_RUN_ATTEMPT, or ARTIFACTS_FILE_SUFFIX not set, not uploading 2025-12-04T11:54:32.2160024Z Uploading artifacts took 0.00 seconds 2025-12-04T11:54:32.2160780Z Running distributed/fsdp/test_fsdp_sharded_grad_scaler 1/1 ... 
[2025-12-04 11:54:32.214779][4974301.064712685] 2025-12-04T11:54:32.2161494Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T11:54:32.2162875Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/fsdp/test_fsdp_sharded_grad_scaler.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:54:32.215247] 2025-12-04T11:57:54.8892453Z 2025-12-04T11:57:54.8897205Z distributed/fsdp/test_fsdp_sharded_grad_scaler 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.fsdp.test_fsdp_sharded_grad_scaler_1.1_43f5ff70528cadd2_.log 2025-12-04T11:57:54.8914019Z Running 20 items in this shard: test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardGradScaler::test_grad_scaling, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardGradScaler::test_inf_gradients_skip_optim_step, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardGradScaler::test_scaling_unscaling_sparse, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_false_none_mixed_precision_none, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_false_none_mixed_precision_use_orig_params, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_false_none_none_none, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_false_none_none_use_orig_params, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_false_shard_grad_op_mixed_precision_none, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_false_shard_grad_op_mixed_precision_use_orig_params, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_false_shard_grad_op_none_none, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_false_shard_grad_op_none_use_orig_params, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_true_none_mixed_precision_none, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_true_none_mixed_precision_use_orig_params, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_true_none_none_none, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_true_none_none_use_orig_params, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_true_shard_grad_op_mixed_precision_none, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_true_shard_grad_op_mixed_precision_use_orig_params, 
test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_true_shard_grad_op_none_none, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_fsdp_ddp_parity_with_grad_scaler_offload_true_shard_grad_op_none_use_orig_params, test/distributed/fsdp/test_fsdp_sharded_grad_scaler.py::TestShardedGradScalerParityWithDDP::test_sharded_grad_scaler_found_inf
2025-12-04T11:57:54.8929136Z 
2025-12-04T11:57:54.8929628Z Finished distributed/fsdp/test_fsdp_sharded_grad_scaler 1/1 ... [2025-12-04 11:57:54.889241][4974503.73917076], took 3.38min
2025-12-04T11:57:54.8937071Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T11:57:54.8963885Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T11:57:54.8970697Z Running distributed/_shard/sharding_plan/test_sharding_plan 1/1 ... [2025-12-04 11:57:54.896911][4974503.746844017]
2025-12-04T11:57:54.8971452Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T11:57:54.8976299Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_shard/sharding_plan/test_sharding_plan.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:57:54.897392]
2025-12-04T11:58:16.5021790Z 
2025-12-04T11:58:16.5023220Z distributed/_shard/sharding_plan/test_sharding_plan 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._shard.sharding_plan.test_sharding_plan_1.1_2eb25bb4287c3eb8_.log
2025-12-04T11:58:16.5025967Z Running 3 items in this shard: test/distributed/_shard/sharding_plan/test_sharding_plan.py::TestShardingPlan::test_custom_sharding_planner, test/distributed/_shard/sharding_plan/test_sharding_plan.py::TestShardingPlan::test_shard_module_sub_process_group, test/distributed/_shard/sharding_plan/test_sharding_plan.py::TestShardingPlan::test_sharding_plan_errors
2025-12-04T11:58:16.5027655Z 
2025-12-04T11:58:16.5028180Z Finished distributed/_shard/sharding_plan/test_sharding_plan 1/1 ... [2025-12-04 11:58:16.501780][4974525.351714151], took 0.36min
2025-12-04T11:58:16.5043012Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T11:58:16.5071017Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T11:58:16.5077886Z Running distributed/fsdp/test_fsdp_comm 1/1 ... [2025-12-04 11:58:16.507464][4974525.357397342]
2025-12-04T11:58:16.5078579Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T11:58:16.5082075Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/fsdp/test_fsdp_comm.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 11:58:16.507962]
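Each "Executing [...]" command above passes --shard-id and --num-shards, which is how the runner ends up with lines like "Running 34 items in this shard". A minimal, purely illustrative sketch of how such flags can partition a test list is below; this is hypothetical and not run_test.py's actual assignment logic, which may also balance shards by recorded test times.

    # Hypothetical round-robin sharding matching "--shard-id=7 --num-shards=7" style flags.
    # shard() is an illustrative helper, not part of the PyTorch test harness.
    def shard(items, shard_id, num_shards):
        # shard_id is 1-based, as in the commands above
        return [t for i, t in enumerate(items) if i % num_shards == shard_id - 1]

    tests = [f"test_{i}" for i in range(10)]
    print(shard(tests, shard_id=7, num_shards=7))  # -> ['test_6']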
2025-12-04T12:05:39.6556069Z 
2025-12-04T12:05:39.6556811Z PRINTING LOG FILE of distributed/fsdp/test_fsdp_comm 1/1 (test/test-reports/distributed.fsdp.test_fsdp_comm_1.1_4659699ad34baeee_.log)
2025-12-04T12:05:39.6558571Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-94da4a4e11b23015.xml
2025-12-04T12:05:39.6559135Z ============================= test session starts ==============================
2025-12-04T12:05:39.6559576Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.6559952Z cachedir: .pytest_cache
2025-12-04T12:05:39.6560404Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.6561059Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.6561337Z configfile: pytest.ini
2025-12-04T12:05:39.6561805Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.6562288Z collecting ... collected 10 items
2025-12-04T12:05:39.6562571Z stepcurrent: Cannot find last run test, not skipping
2025-12-04T12:05:39.6566387Z Running 10 items in this shard: test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda, test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda, test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda, test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda, test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda, test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda, test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda, test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda, test/distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_False_cuda, test/distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_True_cuda
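For reference, a rough local approximation of the pytest session above (assumed paths; requires the pytest-rerunfailures plugin listed in the plugins line). The CI harness drives pytest through run_test.py with extra machinery (stepcurrent, shard selection), so this sketch reproduces only the visible pytest flags.

    # Illustrative local equivalent of the session above; not the harness itself.
    import pytest

    pytest.main([
        "-v", "-x",        # verbose; stop after the first failure
        "--reruns=0",      # pytest-rerunfailures: no automatic reruns
        "-p", "no:xdist",  # disable parallel test distribution
        "test/distributed/fsdp/test_fsdp_comm.py",
    ])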
2025-12-04T12:05:39.6569921Z 
2025-12-04T12:05:39.6570433Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda I1204 11:58:18.329000 415683 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 415752
2025-12-04T12:05:39.6571278Z I1204 11:58:18.330000 415683 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 415753
2025-12-04T12:05:39.6571792Z I1204 11:58:18.330000 415683 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 415754
2025-12-04T12:05:39.6572298Z I1204 11:58:18.331000 415683 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 415755
2025-12-04T12:05:39.6573114Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.6573769Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.6574625Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.6575552Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.6576210Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.6577026Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.6577858Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.6578728Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.6580025Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.6581554Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.6583441Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.6585441Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.6586916Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.6588379Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.6590233Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.6592211Z device_from_device_id = _get_device_from_device_id(
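The UserWarnings above prescribe their own fix: bind each rank to an explicit CUDA device before FSDP initialization, or pass an indexed device as device_id instead of the bare "cuda". A minimal sketch follows; wrap_for_rank is a hypothetical helper, and running it for real requires an initialized process group.

    # Sketch of the remedy suggested by the FSDP `device_id` warning above.
    import torch
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    def wrap_for_rank(model, rank):
        # Option 1: pin this process to its GPU before FSDP initialization.
        torch.cuda.set_device(rank)
        # Option 2: pass an indexed device rather than the bare "cuda" device.
        return FSDP(model, device_id=torch.device("cuda", rank))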
2025-12-04T12:05:39.6593017Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 
2025-12-04T12:05:39.6594159Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.6595803Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.6597389Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.6599009Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.6600519Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.6602032Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.6603657Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.6605185Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.6606703Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.6608234Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.6609722Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.6611277Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.6612845Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.6615192Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
2025-12-04T12:05:39.6617479Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.6618640Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.6620778Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.6622591Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.6623810Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.6625165Z [rank0]:E1204 11:58:28.598000 415752 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.6625960Z dist init r=0, world=4
2025-12-04T12:05:39.6626634Z [rank1]:E1204 11:58:28.624000 415753 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 
2025-12-04T12:05:39.6648597Z [rank1]:E1204 11:58:28.624000 415753 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368.
2025-12-04T12:05:39.6658387Z [rank1]:E1204 11:58:28.624000 415753 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.6659171Z dist init r=1, world=4
2025-12-04T12:05:39.6659829Z [rank3]:E1204 11:58:28.667000 415755 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 
2025-12-04T12:05:39.6681863Z [rank3]:E1204 11:58:28.667000 415755 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504.
2025-12-04T12:05:39.6691677Z [rank3]:E1204 11:58:28.667000 415755 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.6692458Z dist init r=3, world=4
2025-12-04T12:05:39.6693115Z [rank2]:E1204 11:58:28.695000 415754 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 
2025-12-04T12:05:39.6714990Z [rank2]:E1204 11:58:28.695000 415754 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152.
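All four ranks fail the same PYTORCH_TEST_CUDA_MEM_LEAK_CHECK assertion: allocator memory grew across the test. A hypothetical sketch of what such a check does is below; check_leak is illustrative, not the harness code, which also compares driver-level memory (e.g. via torch.cuda.mem_get_info) as the messages above show.

    # Illustrative memory-leak check: snapshot caching-allocator usage before
    # the test and fail if it grew afterwards.
    import torch

    def check_leak(test_fn, device=0):
        torch.cuda.synchronize(device)
        torch.cuda.empty_cache()
        before = torch.cuda.memory_allocated(device)  # caching allocator bytes
        test_fn()
        torch.cuda.synchronize(device)
        torch.cuda.empty_cache()
        after = torch.cuda.memory_allocated(device)
        if after > before:
            raise RuntimeError(
                f"possible leak: caching allocator went from {before} "
                f"to {after} bytes on device {device}"
            )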
2025-12-04T12:05:39.6717179Z [rank2]:E1204 11:58:28.695000 415754 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.6718320Z [rank2]:E1204 11:58:28.695000 415754 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.6720393Z [rank2]:E1204 11:58:28.695000 415754 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.6722234Z [rank2]:E1204 11:58:28.695000 415754 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.6723418Z [rank2]:E1204 11:58:28.695000 415754 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.6724856Z [rank2]:E1204 11:58:28.695000 415754 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.6725643Z dist init r=2, world=4
2025-12-04T12:05:39.6726970Z [rank0]:[W1204 11:58:28.638228689 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:05:39.6728313Z FAILED [12.2273s] [ 10%]
2025-12-04T12:05:39.6728535Z 
2025-12-04T12:05:39.6728728Z =================================== FAILURES ===================================
2025-12-04T12:05:39.6729498Z _ TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda _
2025-12-04T12:05:39.6730217Z Traceback (most recent call last):
2025-12-04T12:05:39.6731078Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.6731885Z     self._join_processes(fn)
2025-12-04T12:05:39.6732696Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.6733570Z     self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.6734445Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.6735290Z     raise RuntimeError(error)
2025-12-04T12:05:39.6735787Z RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.6736397Z Traceback (most recent call last):
2025-12-04T12:05:39.6737182Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.6737969Z     getattr(self, test_name)()
2025-12-04T12:05:39.6738725Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.6739482Z     fn()
2025-12-04T12:05:39.6740152Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.6740960Z     method(*args, **kwargs)
2025-12-04T12:05:39.6741688Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.6742440Z     method(*args, **kwargs)
2025-12-04T12:05:39.6743157Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.6743899Z     with policy():
2025-12-04T12:05:39.6744593Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.6745341Z     raise RuntimeError(msg)
2025-12-04T12:05:39.6746853Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
2025-12-04T12:05:39.6748241Z 
2025-12-04T12:05:39.6748490Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.6749753Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.6750834Z 
2025-12-04T12:05:39.6751125Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.6751536Z 
2025-12-04T12:05:39.6751808Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.6752475Z Process 0 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.6753744Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-94da4a4e11b23015.xml -
2025-12-04T12:05:39.6754828Z =========================== short test summary info ============================
2025-12-04T12:05:39.6756088Z FAILED [12.2273s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda - RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.6757278Z Traceback (most recent call last):
2025-12-04T12:05:39.6758089Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.6758883Z     getattr(self, test_name)()
2025-12-04T12:05:39.6759636Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.6760393Z     fn()
2025-12-04T12:05:39.6761101Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.6761853Z     method(*args, **kwargs)
2025-12-04T12:05:39.6762569Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.6763310Z     method(*args, **kwargs)
2025-12-04T12:05:39.6764019Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.6764750Z     with policy():
2025-12-04T12:05:39.6765526Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.6766283Z     raise RuntimeError(msg)
2025-12-04T12:05:39.6767809Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
2025-12-04T12:05:39.6769199Z 
2025-12-04T12:05:39.6769448Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.6770747Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.6771761Z 
2025-12-04T12:05:39.6772053Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.6772675Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
2025-12-04T12:05:39.6773194Z ============================== 1 failed in 12.24s ==============================
2025-12-04T12:05:39.6773627Z Got exit code 1
2025-12-04T12:05:39.6773943Z Retrying single test...
2025-12-04T12:05:39.6774776Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-a36f8f08664e71eb.xml
2025-12-04T12:05:39.6775704Z ============================= test session starts ==============================
2025-12-04T12:05:39.6776394Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.6777008Z cachedir: .pytest_cache
2025-12-04T12:05:39.6777731Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.6778518Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.6778915Z configfile: pytest.ini
2025-12-04T12:05:39.6779647Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.6780528Z collecting ... collected 10 items / 9 deselected / 1 selected
2025-12-04T12:05:39.6781878Z stepcurrent: skipping 0 already run items.
Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.6782997Z Running 1 items in this shard 2025-12-04T12:05:39.6783230Z 2025-12-04T12:05:39.6784369Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda I1204 11:58:33.271000 416085 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 416154 2025-12-04T12:05:39.6786118Z I1204 11:58:33.272000 416085 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 416155 2025-12-04T12:05:39.6787246Z I1204 11:58:33.273000 416085 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 416156 2025-12-04T12:05:39.6788357Z I1204 11:58:33.273000 416085 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 416157 2025-12-04T12:05:39.6790154Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.6791653Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.6793545Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.6795550Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.6797030Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.6798451Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.6799858Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.6801309Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.6803183Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.6805101Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.6807015Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. 
If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.6808936Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.6810413Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.6811891Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.6813896Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.6815792Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.6816569Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.6817697Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.6819311Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.6820953Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.6822519Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.6823987Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.6825431Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.6827041Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.6828566Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.6830076Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.6831625Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.6833109Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.6834601Z [rank2]:E1204 11:58:43.421000 416156 
site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.6836135Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.6838449Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152. 2025-12-04T12:05:39.6840685Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.6841830Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.6843987Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.6845782Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.6846976Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.6848335Z [rank2]:E1204 11:58:43.421000 416156 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:05:39.6849119Z dist init r=2, world=4 2025-12-04T12:05:39.6849782Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.6850952Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.6852542Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.6854117Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.6855680Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.6857312Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.6858761Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.6860570Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 
2025-12-04T12:05:39.6862127Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.6863650Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.6865168Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.6866644Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.6868128Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.6869652Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.6872002Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504. 2025-12-04T12:05:39.6874180Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.6875424Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.6877502Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.6879300Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.6880489Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.6881888Z [rank3]:E1204 11:58:43.466000 416157 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:05:39.6882672Z dist init r=3, world=4 2025-12-04T12:05:39.6883331Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.6884427Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.6886008Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.6887660Z [rank1]:E1204 11:58:43.471000 416155 
site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.6889228Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.6890744Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.6892184Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.6893697Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.6895211Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.6896719Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.6898232Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.6899710Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.6901267Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.6902793Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.6905186Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368. 
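Worth noting about the numbers in the four per-rank RuntimeError messages above: the caching-allocator growth is identical on every rank (512 -> 19456 bytes), and so is the driver-side growth. A quick check of the deltas, using only the values quoted in this log:

```python
# Driver-allocated bytes (before, after) per device, copied verbatim
# from the RuntimeError messages above.
driver_mem = {
    0: (2459959296, 3307208704),
    1: (2317352960, 3164602368),
    2: (2300575744, 3147825152),
    3: (2250244096, 3097493504),
}
for device, (before, after) in driver_mem.items():
    delta = after - before
    print(f"device {device}: +{delta} bytes = {delta / 2**20:.0f} MiB")
# Every device grows by exactly 847249408 bytes, i.e. 808 MiB.
```

An identical, exactly repeated per-rank delta looks like a one-time allocation that survives the test (for instance, communicator or allocator state created during process-group setup) rather than unbounded growth; that is an inference from the numbers, not something the log states.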
2025-12-04T12:05:39.6907191Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.6907904Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.6909202Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.6910327Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.6911110Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.6911958Z [rank1]:E1204 11:58:43.471000 416155 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:05:39.6912449Z dist init r=1, world=4 2025-12-04T12:05:39.6912861Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.6913605Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.6914604Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.6915584Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.6916563Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.6917474Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.6918373Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.6919322Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.6920272Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.6921257Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.6922226Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.6923153Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] 
with policy(): 2025-12-04T12:05:39.6924137Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.6925090Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.6926528Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704. 2025-12-04T12:05:39.6927887Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.6928600Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.6929899Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.6931086Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.6931827Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.6932723Z [rank0]:E1204 11:58:43.474000 416154 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:05:39.6933213Z dist init r=0, world=4 2025-12-04T12:05:39.6934033Z [rank0]:[W1204 11:58:43.621380760 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:05:39.6934871Z FAILED [12.2259s] [100%] 2025-12-04T12:05:39.6935009Z 2025-12-04T12:05:39.6935129Z =================================== FAILURES =================================== 2025-12-04T12:05:39.6935604Z _ TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda _ 2025-12-04T12:05:39.6936053Z Traceback (most recent call last): 2025-12-04T12:05:39.6936561Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:05:39.6937064Z self._join_processes(fn) 2025-12-04T12:05:39.6937567Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:05:39.6938108Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:05:39.6938659Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:05:39.6939193Z raise RuntimeError(error) 2025-12-04T12:05:39.6939502Z RuntimeError: Process 2 exited with error code 10 and exception: 2025-12-04T12:05:39.6939832Z Traceback (most recent call last): 2025-12-04T12:05:39.6941515Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.6942694Z getattr(self, test_name)() 2025-12-04T12:05:39.6943574Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.6944372Z fn() 2025-12-04T12:05:39.6945060Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.6945827Z method(*args, **kwargs) 2025-12-04T12:05:39.6947316Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.6947790Z method(*args, **kwargs) 2025-12-04T12:05:39.6948238Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.6948698Z with policy(): 2025-12-04T12:05:39.6949136Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.6949617Z raise RuntimeError(msg) 2025-12-04T12:05:39.6950583Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152. 
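The RuntimeError text quotes two counters that the PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 wrapper snapshots before the test body and compares after it exits. A minimal sketch of that before/after idea, using only public torch.cuda calls; this is illustrative, not the actual check implemented in torch/testing/_internal/common_utils.py:

```python
import torch

def check_for_leak(fn, device=0):
    """Illustrative only: run fn and flag memory that survives it.

    Mirrors the two counters quoted in the error message:
    - torch.cuda.memory_allocated(): bytes held by the caching allocator
    - torch.cuda.mem_get_info(): free/total bytes as seen by the driver
    """
    torch.cuda.synchronize(device)
    alloc_before = torch.cuda.memory_allocated(device)
    free_before, total = torch.cuda.mem_get_info(device)
    driver_before = total - free_before

    fn()

    torch.cuda.synchronize(device)
    torch.cuda.empty_cache()  # release cached blocks so real leaks stand out
    alloc_after = torch.cuda.memory_allocated(device)
    free_after, _ = torch.cuda.mem_get_info(device)
    driver_after = total - free_after
    if alloc_after > alloc_before:
        raise RuntimeError(
            f"Caching allocator allocated memory was {alloc_before} "
            f"and is now reported as {alloc_after} on device {device}. "
            f"CUDA driver allocated memory was {driver_before} "
            f"and is now {driver_after}."
        )
```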
2025-12-04T12:05:39.6951568Z 2025-12-04T12:05:39.6951725Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.6952525Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.6953168Z 2025-12-04T12:05:39.6953358Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.6953621Z 2025-12-04T12:05:39.6953625Z 2025-12-04T12:05:39.6953798Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:05:39.6954341Z Process 2 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:05:39.6955080Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-a36f8f08664e71eb.xml - 2025-12-04T12:05:39.6955762Z =========================== short test summary info ============================ 2025-12-04T12:05:39.6956600Z FAILED [12.2259s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda - RuntimeError: Process 2 exited with error code 10 and exception: 2025-12-04T12:05:39.6957357Z Traceback (most recent call last): 2025-12-04T12:05:39.6957876Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.6958380Z getattr(self, test_name)() 2025-12-04T12:05:39.6958864Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.6959345Z fn() 2025-12-04T12:05:39.6959756Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.6960226Z method(*args, **kwargs) 2025-12-04T12:05:39.6960715Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.6961230Z method(*args, **kwargs) 2025-12-04T12:05:39.6961677Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.6962139Z with policy(): 2025-12-04T12:05:39.6962572Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.6963057Z raise RuntimeError(msg) 2025-12-04T12:05:39.6964006Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152. 2025-12-04T12:05:39.6964888Z 2025-12-04T12:05:39.6965043Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.6965924Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.6966556Z 2025-12-04T12:05:39.6966742Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.6967129Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
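Each failed attempt also ends with the ProcessGroupNCCL warning that destroy_process_group() was not called before program exit. The shutdown pattern the warning asks for is explicit teardown; a minimal sketch under torchrun-style environment variables (RANK/LOCAL_RANK are the standard launcher variables, not values from this log):

```python
import os

import torch
import torch.distributed as dist

def main() -> None:
    # Minimal sketch of the shutdown pattern the warning asks for.
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    dist.init_process_group(backend="nccl")
    try:
        ...  # training / test body goes here
    finally:
        dist.destroy_process_group()  # silences the ProcessGroupNCCL warning

if __name__ == "__main__":
    main()
```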
2025-12-04T12:05:39.6967472Z ======================= 1 failed, 9 deselected in 12.24s ======================= 2025-12-04T12:05:39.6967761Z Got exit code 1 2025-12-04T12:05:39.6990028Z Retrying single test... 2025-12-04T12:05:39.6990652Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-926d1859e2a38070.xml 2025-12-04T12:05:39.6991256Z ============================= test session starts ============================== 2025-12-04T12:05:39.6991712Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.6992129Z cachedir: .pytest_cache 2025-12-04T12:05:39.6992596Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.6993100Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.6993351Z configfile: pytest.ini 2025-12-04T12:05:39.6993826Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.6994389Z collecting ... collected 10 items / 9 deselected / 1 selected 2025-12-04T12:05:39.6995302Z stepcurrent: skipping 0 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.6996003Z Running 1 items in this shard 2025-12-04T12:05:39.6996157Z 2025-12-04T12:05:39.6996878Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda I1204 11:58:48.125000 416487 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 416556 2025-12-04T12:05:39.6997982Z I1204 11:58:48.126000 416487 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 416557 2025-12-04T12:05:39.6998691Z I1204 11:58:48.127000 416487 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 416558 2025-12-04T12:05:39.6999389Z I1204 11:58:48.127000 416487 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 416559 2025-12-04T12:05:39.7000530Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7001493Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7002687Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
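The FSDP warning repeated in the session above also states its own remedies: call torch.cuda.set_device() before wrapping, or pass a device_id with an explicit index instead of the bare "cuda" device it complains about. A minimal sketch (the model and rank arguments are illustrative):

```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def wrap(model: torch.nn.Module, rank: int) -> FSDP:
    # Option 1: make the current device explicit before FSDP init.
    torch.cuda.set_device(rank)
    # Option 2: pass an indexed device rather than the bare "cuda"
    # string that triggers the UserWarning above.
    return FSDP(model, device_id=torch.device("cuda", rank))
```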
2025-12-04T12:05:39.7003897Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7004826Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7005719Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7006662Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7007550Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7008765Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7009978Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7011222Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7012410Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7013328Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7014279Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7015441Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
2025-12-04T12:05:39.7016633Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7017134Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7017850Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7018866Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7019862Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7020889Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7021817Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7022729Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7023683Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7024642Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7025591Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7026623Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7027555Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7028491Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7029448Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7030951Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368. 
2025-12-04T12:05:39.7032335Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7033062Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7034430Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.7035561Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7036315Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7037171Z [rank1]:E1204 11:58:58.112000 416557 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:05:39.7037674Z dist init r=1, world=4 2025-12-04T12:05:39.7038094Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7038795Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7039795Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7040832Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7041812Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7042733Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7043641Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7044590Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7045592Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7046537Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7047484Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7048410Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] 
with policy(): 2025-12-04T12:05:39.7049341Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7050301Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7051805Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152. 2025-12-04T12:05:39.7053253Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7053973Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7055282Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.7056413Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7057160Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7058014Z [rank2]:E1204 11:58:58.181000 416558 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:05:39.7058510Z dist init r=2, world=4 2025-12-04T12:05:39.7058929Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7059624Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7060679Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7061666Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7062647Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7063563Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7064522Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7065471Z [rank3]:E1204 11:58:58.188000 416559 
site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7066422Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7067382Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7068334Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7069263Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7070190Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7071178Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7072697Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504. 2025-12-04T12:05:39.7074057Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7074774Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7076070Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.7077201Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7077945Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7078796Z [rank3]:E1204 11:58:58.188000 416559 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:05:39.7079288Z dist init r=3, world=4 2025-12-04T12:05:39.7079702Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7080392Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7081451Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 
2025-12-04T12:05:39.7082434Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7083463Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7084382Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7085289Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7086242Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7087189Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7088159Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7089129Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7089823Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7090303Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7090799Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7091494Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704. 
2025-12-04T12:05:39.7092155Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7092500Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7093126Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.7093668Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7094030Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7094440Z [rank0]:E1204 11:58:58.196000 416556 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:05:39.7094685Z dist init r=0, world=4 2025-12-04T12:05:39.7095101Z [rank0]:[W1204 11:58:58.236089233 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:05:39.7095511Z FAILED [12.0243s] [100%] 2025-12-04T12:05:39.7095582Z 2025-12-04T12:05:39.7095672Z =================================== FAILURES =================================== 2025-12-04T12:05:39.7095908Z _ TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda _ 2025-12-04T12:05:39.7096131Z Traceback (most recent call last): 2025-12-04T12:05:39.7096378Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:05:39.7096623Z self._join_processes(fn) 2025-12-04T12:05:39.7096873Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:05:39.7097140Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:05:39.7097408Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:05:39.7097667Z raise RuntimeError(error) 2025-12-04T12:05:39.7097823Z RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:05:39.7097989Z Traceback (most recent call last): 2025-12-04T12:05:39.7098229Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7098472Z getattr(self, test_name)() 2025-12-04T12:05:39.7098702Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7098934Z fn() 2025-12-04T12:05:39.7099141Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7099403Z method(*args, **kwargs) 2025-12-04T12:05:39.7099624Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7099853Z method(*args, 
**kwargs) 2025-12-04T12:05:39.7100073Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7100298Z with policy(): 2025-12-04T12:05:39.7100507Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7100759Z raise RuntimeError(msg) 2025-12-04T12:05:39.7101214Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704. 2025-12-04T12:05:39.7101638Z 2025-12-04T12:05:39.7101718Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7102101Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.7102406Z 2025-12-04T12:05:39.7102500Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7102625Z 2025-12-04T12:05:39.7102627Z 2025-12-04T12:05:39.7102710Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:05:39.7102914Z Process 0 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:05:39.7103267Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-926d1859e2a38070.xml - 2025-12-04T12:05:39.7103596Z =========================== short test summary info ============================ 2025-12-04T12:05:39.7103981Z FAILED [12.0243s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda - RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:05:39.7104343Z Traceback (most recent call last): 2025-12-04T12:05:39.7104618Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7104860Z getattr(self, test_name)() 2025-12-04T12:05:39.7105091Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7105321Z fn() 2025-12-04T12:05:39.7105522Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7105753Z method(*args, **kwargs) 2025-12-04T12:05:39.7106009Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7106237Z method(*args, **kwargs) 2025-12-04T12:05:39.7106455Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7106678Z with policy(): 2025-12-04T12:05:39.7106890Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7107128Z raise RuntimeError(msg) 2025-12-04T12:05:39.7107587Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda! 
Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
2025-12-04T12:05:39.7108035Z
2025-12-04T12:05:39.7108115Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7108502Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7108810Z
2025-12-04T12:05:39.7108902Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7109093Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
2025-12-04T12:05:39.7109263Z ======================= 1 failed, 9 deselected in 12.04s =======================
2025-12-04T12:05:39.7109405Z Got exit code 1
2025-12-04T12:05:39.7109685Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7110066Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set
2025-12-04T12:05:39.7110425Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-11f63fc7c73217d3.xml
2025-12-04T12:05:39.7110741Z ============================= test session starts ==============================
2025-12-04T12:05:39.7110957Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.7111151Z cachedir: .pytest_cache
2025-12-04T12:05:39.7111377Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.7111617Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.7111741Z configfile: pytest.ini
2025-12-04T12:05:39.7111969Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.7112240Z collecting ... collected 10 items / 1 deselected / 9 selected
2025-12-04T12:05:39.7112406Z stepcurrent: skipping 1 already run items.
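The RuntimeError above comes from a leak-check policy context manager (the `with policy():` frame in common_utils.py) that snapshots CUDA memory counters before the test body and compares them on exit. Below is a minimal sketch of that before/after idea; it is not PyTorch's actual CudaMemoryLeakCheck implementation, and the class name and zero-growth threshold are illustrative assumptions.

```python
# Minimal sketch of a before/after CUDA memory-leak check, loosely modeled on
# the policy context manager in the traceback above. NOT the real
# torch.testing._internal implementation; names/thresholds are assumptions.
import gc
import torch

class CudaLeakCheck:
    """Raise if caching-allocator usage grows across the managed block."""

    def __init__(self, device: int = 0):
        self.device = device

    def __enter__(self):
        gc.collect()
        torch.cuda.synchronize(self.device)
        torch.cuda.empty_cache()
        self.before = torch.cuda.memory_allocated(self.device)
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            return False  # do not mask the test's own failure
        gc.collect()
        torch.cuda.synchronize(self.device)
        torch.cuda.empty_cache()
        after = torch.cuda.memory_allocated(self.device)
        if after > self.before:
            # Mirrors the log's wording: allocator usage grew across the test.
            raise RuntimeError(
                f"possible CUDA leak on device {self.device}: "
                f"allocated memory was {self.before} and is now {after}"
            )
        return False

# Hypothetical usage:
# with CudaLeakCheck(device=0):
#     run_the_test()
```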
2025-12-04T12:05:39.7112542Z Running 9 items in this shard 2025-12-04T12:05:39.7112618Z 2025-12-04T12:05:39.7112995Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda I1204 11:59:02.832000 416889 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 416958 2025-12-04T12:05:39.7113533Z I1204 11:59:02.833000 416889 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 416959 2025-12-04T12:05:39.7113878Z I1204 11:59:02.833000 416889 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 416960 2025-12-04T12:05:39.7114221Z I1204 11:59:02.834000 416889 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 416961 2025-12-04T12:05:39.7114771Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7115214Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7115797Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7116382Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7116834Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7117299Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7117727Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7118162Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7118727Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7119310Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7119888Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
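The `_init_utils.py:571` UserWarning above names its own two remedies: pin the current device per rank before FSDP initialization, or pass an indexed device as `device_id` instead of the bare string "cuda". A short sketch of both, where the model and rank plumbing are placeholders (the process group is assumed to be initialized already), not the test's actual code:

```python
# Sketch of the two fixes the FSDP `device_id` warning suggests. Assumes
# torch.distributed.init_process_group() has already run for this rank.
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def wrap_model(model: torch.nn.Module, rank: int) -> FSDP:
    # Option 1: pin the current CUDA device before FSDP initialization.
    torch.cuda.set_device(rank)
    # Option 2: pass an explicit device index rather than the bare "cuda"
    # string, so FSDP does not have to guess which GPU this rank owns.
    return FSDP(model, device_id=torch.device("cuda", rank))
```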
2025-12-04T12:05:39.7120467Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7120950Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7121389Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7121956Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7122854Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7123130Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7123475Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7123966Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7124445Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7124929Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7125378Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7125819Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7126281Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7126744Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7127240Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7127704Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7128152Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7128609Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in 
__exit__ 2025-12-04T12:05:39.7129090Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7129794Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368. 2025-12-04T12:05:39.7130455Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7130846Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7131477Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7132025Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7132418Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7132831Z [rank1]:E1204 11:59:12.909000 416959 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:05:39.7133076Z dist init r=1, world=4 2025-12-04T12:05:39.7133282Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7133619Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7134102Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7134577Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7135056Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7135502Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7135942Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7136432Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7136893Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7137352Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7137813Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7138261Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7138715Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7139178Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7139878Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504. 2025-12-04T12:05:39.7140546Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7140938Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7141601Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7142145Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7142510Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7142921Z [rank3]:E1204 11:59:12.974000 416961 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:05:39.7143167Z dist init r=3, world=4 2025-12-04T12:05:39.7143373Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7143711Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7144195Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7144671Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7145146Z [rank2]:E1204 11:59:12.977000 416960 
site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7145621Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7146059Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7146522Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7146984Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7147443Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7147910Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7148389Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7148844Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7149307Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7150006Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152. 
2025-12-04T12:05:39.7150694Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7151069Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7151696Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7152240Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7152606Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7153018Z [rank2]:E1204 11:59:12.977000 416960 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:05:39.7153263Z dist init r=2, world=4 2025-12-04T12:05:39.7153468Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7153805Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7154291Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7154801Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7155274Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7155720Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7156156Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7156616Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7157077Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7157538Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7158001Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7158449Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] 
with policy(): 2025-12-04T12:05:39.7158903Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7159374Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7160094Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704. 2025-12-04T12:05:39.7160790Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7161142Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7161772Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7162317Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7162683Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7163096Z [rank0]:E1204 11:59:13.061000 416958 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:05:39.7163341Z dist init r=0, world=4 2025-12-04T12:05:39.7163744Z [rank0]:[W1204 11:59:13.124353955 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:05:39.7164191Z FAILED [12.1271s] [ 11%] 2025-12-04T12:05:39.7164258Z 2025-12-04T12:05:39.7164315Z =================================== FAILURES =================================== 2025-12-04T12:05:39.7164543Z _ TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda _ 2025-12-04T12:05:39.7164760Z Traceback (most recent call last): 2025-12-04T12:05:39.7165001Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:05:39.7165240Z self._join_processes(fn) 2025-12-04T12:05:39.7165482Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:05:39.7165741Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:05:39.7166006Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:05:39.7166260Z raise RuntimeError(error) 2025-12-04T12:05:39.7166409Z RuntimeError: Process 3 exited with error code 10 and exception: 2025-12-04T12:05:39.7166567Z Traceback (most recent call last): 2025-12-04T12:05:39.7166803Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7167039Z getattr(self, test_name)() 2025-12-04T12:05:39.7167265Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7167492Z fn() 2025-12-04T12:05:39.7167689Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7167916Z method(*args, **kwargs) 2025-12-04T12:05:39.7168135Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7168361Z method(*args, **kwargs) 2025-12-04T12:05:39.7168575Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7168795Z with policy(): 2025-12-04T12:05:39.7169032Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7169261Z raise RuntimeError(msg) 2025-12-04T12:05:39.7169715Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504. 
2025-12-04T12:05:39.7170132Z 2025-12-04T12:05:39.7170207Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7170587Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7170926Z 2025-12-04T12:05:39.7171013Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7171137Z 2025-12-04T12:05:39.7171138Z 2025-12-04T12:05:39.7171217Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:05:39.7171415Z Process 3 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:05:39.7171767Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-11f63fc7c73217d3.xml - 2025-12-04T12:05:39.7172090Z =========================== short test summary info ============================ 2025-12-04T12:05:39.7172500Z FAILED [12.1271s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda - RuntimeError: Process 3 exited with error code 10 and exception: 2025-12-04T12:05:39.7172856Z Traceback (most recent call last): 2025-12-04T12:05:39.7173096Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7173335Z getattr(self, test_name)() 2025-12-04T12:05:39.7173563Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7173795Z fn() 2025-12-04T12:05:39.7173999Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7174229Z method(*args, **kwargs) 2025-12-04T12:05:39.7174454Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7174684Z method(*args, **kwargs) 2025-12-04T12:05:39.7174904Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7175130Z with policy(): 2025-12-04T12:05:39.7175339Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7175571Z raise RuntimeError(msg) 2025-12-04T12:05:39.7176025Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504. 2025-12-04T12:05:39.7176440Z 2025-12-04T12:05:39.7176517Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7176898Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7177199Z 2025-12-04T12:05:39.7177290Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7177481Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
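The "Process 3 exited with error code 10 and exception" failure above is raised by the parent test process: `_join_processes` / `_check_return_codes` in the traceback spawn one worker per rank, wait for them, and fail the test if any worker's exit code is non-zero. The sketch below shows that spawn-and-check pattern in simplified form; it is not PyTorch's harness, and only the exit code 10 is taken from the log.

```python
# Simplified sketch of the spawn-and-check pattern behind
# "_join_processes" / "_check_return_codes": one process per rank, and the
# parent raises if any worker exits non-zero.
import multiprocessing as mp

MEM_LEAK_EXIT_CODE = 10  # the exit code reported throughout this log

def worker(rank: int, world_size: int) -> None:
    # Placeholder for the per-rank test body; a detected leak would make the
    # worker exit with MEM_LEAK_EXIT_CODE.
    pass

def run_distributed_test(world_size: int = 4) -> None:
    procs = [
        mp.Process(target=worker, args=(rank, world_size))
        for rank in range(world_size)
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    for rank, p in enumerate(procs):
        if p.exitcode != 0:
            raise RuntimeError(
                f"Process {rank} exited with error code {p.exitcode}"
            )

if __name__ == "__main__":
    run_distributed_test()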
2025-12-04T12:05:39.7177677Z ======================= 1 failed, 1 deselected in 12.15s ======================= 2025-12-04T12:05:39.7177816Z Got exit code 1 2025-12-04T12:05:39.7177914Z Retrying single test... 2025-12-04T12:05:39.7178170Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-7c34caf3bc48413c.xml 2025-12-04T12:05:39.7178448Z ============================= test session starts ============================== 2025-12-04T12:05:39.7178653Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7178839Z cachedir: .pytest_cache 2025-12-04T12:05:39.7179062Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7179298Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7179418Z configfile: pytest.ini 2025-12-04T12:05:39.7179648Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7179917Z collecting ... collected 10 items / 9 deselected / 1 selected 2025-12-04T12:05:39.7180288Z stepcurrent: skipping 1 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7180664Z Running 1 items in this shard 2025-12-04T12:05:39.7180737Z 2025-12-04T12:05:39.7181115Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda I1204 11:59:17.608000 417291 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 417360 2025-12-04T12:05:39.7181676Z I1204 11:59:17.609000 417291 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 417361 2025-12-04T12:05:39.7182018Z I1204 11:59:17.609000 417291 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 417362 2025-12-04T12:05:39.7182354Z I1204 11:59:17.610000 417291 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 417363 2025-12-04T12:05:39.7182897Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7183336Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7183765Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7184200Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7184771Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
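The `transformer.py:144` UserWarning repeated on every rank above fires because the encoder layer is built with `batch_first=False`, which disables the nested-tensor fast path that `enable_nested_tensor=True` asks for. A sketch of the construction that avoids the warning; the sizes are arbitrary examples, not the test's configuration:

```python
# Sketch: build the encoder layer with batch_first=True so that
# TransformerEncoder's enable_nested_tensor fast path is actually usable.
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(
    d_model=256,
    nhead=8,
    batch_first=True,  # the warning fires when this is False
)
encoder = nn.TransformerEncoder(
    encoder_layer,
    num_layers=2,
    enable_nested_tensor=True,  # now effective instead of silently disabled
)
```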
2025-12-04T12:05:39.7185350Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7185925Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7186503Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7186976Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7187409Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7187971Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7188548Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7188996Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7189429Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7189996Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
2025-12-04T12:05:39.7190636Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7190877Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7191219Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7191707Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7192183Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7192660Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7193109Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7193548Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7194010Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7194473Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7194931Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7195394Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7195876Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7196362Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7196828Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7197526Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2462056448 and is now 3307208704. 
2025-12-04T12:05:39.7198187Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7198558Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7199190Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7199734Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7200131Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7200544Z [rank0]:E1204 11:59:27.530000 417360 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:05:39.7200823Z dist init r=0, world=4 2025-12-04T12:05:39.7201030Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7201365Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7201847Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7202326Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7202800Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7203250Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7203689Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7204150Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7204614Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7205073Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7205561Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7206011Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] 
with policy(): 2025-12-04T12:05:39.7206463Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7206926Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7207624Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152. 2025-12-04T12:05:39.7208279Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7208628Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7209291Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7209834Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7210200Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7210653Z [rank2]:E1204 11:59:27.696000 417362 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:05:39.7210896Z dist init r=2, world=4 2025-12-04T12:05:39.7211296Z [rank0]:[W1204 11:59:27.596604302 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:05:39.7211809Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7212146Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7212632Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7213108Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7213583Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7214031Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7214503Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7214965Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7215425Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7215887Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7216351Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7216800Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7217255Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7217717Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7218412Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504. 
2025-12-04T12:05:39.7219097Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7219448Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7220079Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7220667Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7221030Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7221441Z [rank3]:E1204 11:59:27.764000 417363 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:05:39.7221685Z dist init r=3, world=4 2025-12-04T12:05:39.7221888Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7222224Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7222710Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7223188Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7223661Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7224133Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7224572Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7225032Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7225494Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7225954Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7226415Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7226862Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] 
with policy(): 2025-12-04T12:05:39.7227312Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7227802Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7228501Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368. 2025-12-04T12:05:39.7229162Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7229510Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7230138Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7230724Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7231089Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7231500Z [rank1]:E1204 11:59:27.838000 417361 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:05:39.7231743Z dist init r=1, world=4 2025-12-04T12:05:39.7231850Z FAILED [11.8260s] [100%] 2025-12-04T12:05:39.7231921Z 2025-12-04T12:05:39.7231979Z =================================== FAILURES =================================== 2025-12-04T12:05:39.7232215Z _ TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda _ 2025-12-04T12:05:39.7232437Z Traceback (most recent call last): 2025-12-04T12:05:39.7232685Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:05:39.7232929Z self._join_processes(fn) 2025-12-04T12:05:39.7233209Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:05:39.7233476Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:05:39.7233747Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:05:39.7234006Z raise RuntimeError(error) 2025-12-04T12:05:39.7234161Z RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:05:39.7234328Z Traceback (most recent call last): 2025-12-04T12:05:39.7234569Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7234811Z getattr(self, test_name)() 
2025-12-04T12:05:39.7235045Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7235282Z fn() 2025-12-04T12:05:39.7235486Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7235719Z method(*args, **kwargs) 2025-12-04T12:05:39.7235942Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7236174Z method(*args, **kwargs) 2025-12-04T12:05:39.7236394Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7236652Z with policy(): 2025-12-04T12:05:39.7236866Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7237101Z raise RuntimeError(msg) 2025-12-04T12:05:39.7237560Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2462056448 and is now 3307208704. 2025-12-04T12:05:39.7237982Z 2025-12-04T12:05:39.7238059Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7238441Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7238751Z 2025-12-04T12:05:39.7238840Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7238969Z 2025-12-04T12:05:39.7238971Z 2025-12-04T12:05:39.7239049Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:05:39.7239252Z Process 0 terminated with exit code 10, terminating remaining processes. 
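The ProcessGroupNCCL warning earlier in this run ("destroy_process_group() was not called before program exit, which can leak resources") points at missing teardown in the per-rank workers. A minimal sketch of the init/teardown pairing it asks for; the rendezvous address and port are placeholder assumptions:

```python
# Sketch of the teardown the ProcessGroupNCCL warning asks for: pair every
# init_process_group() with destroy_process_group(), even if the body raises.
import os
import torch.distributed as dist

def main(rank: int, world_size: int) -> None:
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")  # placeholder rendezvous
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    try:
        pass  # per-rank test body goes here
    finally:
        dist.destroy_process_group()  # silences the resource-leak warning
```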
2025-12-04T12:05:39.7239611Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-7c34caf3bc48413c.xml - 2025-12-04T12:05:39.7239941Z =========================== short test summary info ============================ 2025-12-04T12:05:39.7240326Z FAILED [11.8260s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda - RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:05:39.7240720Z Traceback (most recent call last): 2025-12-04T12:05:39.7240963Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7241206Z getattr(self, test_name)() 2025-12-04T12:05:39.7241437Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7241666Z fn() 2025-12-04T12:05:39.7241896Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7242126Z method(*args, **kwargs) 2025-12-04T12:05:39.7242346Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7242572Z method(*args, **kwargs) 2025-12-04T12:05:39.7242791Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7243016Z with policy(): 2025-12-04T12:05:39.7243227Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7243455Z raise RuntimeError(msg) 2025-12-04T12:05:39.7243916Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2462056448 and is now 3307208704. 2025-12-04T12:05:39.7244335Z 2025-12-04T12:05:39.7244413Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7244794Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7245096Z 2025-12-04T12:05:39.7245186Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7245405Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 2025-12-04T12:05:39.7245572Z ======================= 1 failed, 9 deselected in 11.84s ======================= 2025-12-04T12:05:39.7245709Z Got exit code 1 2025-12-04T12:05:39.7245808Z Retrying single test... 
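Note: the "CUDA driver API confirmed a leak" failures above come from the memory-leak-check wrapper that PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 enables (PYTORCH_TEST_WITH_ROCM=1 marks the ROCm build, where the CUDA APIs are HIP-backed). The sketch below illustrates the underlying idea only and is not the actual checker in torch/testing/_internal/common_utils.py: snapshot both the caching allocator's and the driver's view of device memory before the test, and flag growth that both views confirm afterwards.

    import torch

    def assert_no_cuda_leak(test_fn, device=0):
        # Illustrative sketch, not PyTorch's implementation.
        torch.cuda.synchronize(device)
        alloc_before = torch.cuda.memory_allocated(device)    # caching allocator bytes
        free_before, total = torch.cuda.mem_get_info(device)  # driver's view
        driver_before = total - free_before
        test_fn()
        torch.cuda.synchronize(device)
        torch.cuda.empty_cache()  # return cached blocks so driver numbers settle
        alloc_after = torch.cuda.memory_allocated(device)
        free_after, _ = torch.cuda.mem_get_info(device)
        driver_after = total - free_after
        if alloc_after > alloc_before and driver_after > driver_before:
            raise RuntimeError(
                f"Caching allocator allocated memory was {alloc_before} and is "
                f"now {alloc_after} on device {device}. CUDA driver allocated "
                f"memory was {driver_before} and is now {driver_after}.")

In the failures above the allocator delta is tiny (512 -> 19456 bytes) while the driver delta is roughly 0.8 GiB on every rank, which looks more like per-test communicator/runtime state than leaked tensors; that reading is consistent with the destroy_process_group() warning later in this log.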
2025-12-04T12:05:39.7246065Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-4364efd8473c8991.xml 2025-12-04T12:05:39.7246350Z ============================= test session starts ============================== 2025-12-04T12:05:39.7246560Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7246748Z cachedir: .pytest_cache 2025-12-04T12:05:39.7246972Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7247218Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7247338Z configfile: pytest.ini 2025-12-04T12:05:39.7247568Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7247841Z collecting ... collected 10 items / 9 deselected / 1 selected 2025-12-04T12:05:39.7248257Z stepcurrent: skipping 1 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7248598Z Running 1 items in this shard 2025-12-04T12:05:39.7248671Z 2025-12-04T12:05:39.7249013Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda I1204 11:59:32.350000 417693 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 417762 2025-12-04T12:05:39.7249548Z I1204 11:59:32.350000 417693 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 417763 2025-12-04T12:05:39.7249891Z I1204 11:59:32.351000 417693 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 417764 2025-12-04T12:05:39.7250232Z I1204 11:59:32.352000 417693 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 417765 2025-12-04T12:05:39.7250859Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7251298Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7251869Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7252459Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7252913Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7253348Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7253911Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. 
FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7254524Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7254969Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7255397Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7255958Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7256528Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7256974Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7257403Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7257970Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
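Note: the recurring FSDP UserWarning above names its own two remedies. A minimal sketch of both follows, assuming a per-process `rank` and an unwrapped `model` (placeholder names); either remedy alone silences the warning.

    import torch
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    def wrap(model, rank):
        # Remedy 1: bind this process to its GPU before FSDP initialization.
        torch.cuda.set_device(rank)
        # Remedy 2: pass an indexed device instead of the bare "cuda" string,
        # so FSDP does not have to guess the device index from context.
        return FSDP(model, device_id=torch.device("cuda", rank))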
2025-12-04T12:05:39.7258546Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7258788Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7259135Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7259625Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7260126Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7260640Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7261083Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7261518Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7261983Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7262450Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7262913Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7263378Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7263865Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7264317Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7264781Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7265480Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152. 
2025-12-04T12:05:39.7266144Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7266500Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7267136Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7267685Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7268052Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7268466Z [rank2]:E1204 11:59:42.514000 417764 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:05:39.7268711Z dist init r=2, world=4 2025-12-04T12:05:39.7268915Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7269284Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7269764Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7270237Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7270751Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7271204Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7271649Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7272116Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7272582Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7273078Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7273538Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7273989Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] 
with policy(): 2025-12-04T12:05:39.7274442Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7274908Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7275607Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504. 2025-12-04T12:05:39.7276269Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7276617Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7277251Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7277803Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7278171Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7278611Z [rank3]:E1204 11:59:42.575000 417765 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:05:39.7278854Z dist init r=3, world=4 2025-12-04T12:05:39.7279058Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7279392Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7279878Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7280362Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7280881Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7281330Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7281767Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7282227Z [rank0]:E1204 11:59:42.581000 417762 
site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7282717Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7283178Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7283639Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7284091Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7284546Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7285011Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7285709Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704. 2025-12-04T12:05:39.7286364Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7286706Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7287334Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7287902Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7288262Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7288669Z [rank0]:E1204 11:59:42.581000 417762 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:05:39.7288912Z dist init r=0, world=4 2025-12-04T12:05:39.7289115Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7289459Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7289942Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 
2025-12-04T12:05:39.7290418Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7290927Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7291379Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7291845Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7292305Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7292764Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7293221Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7293717Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7294166Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7294618Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7295077Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7295768Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368. 
2025-12-04T12:05:39.7296421Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7296763Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7297425Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7297968Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7298330Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7298739Z [rank1]:E1204 11:59:42.606000 417763 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:05:39.7298978Z dist init r=1, world=4 2025-12-04T12:05:39.7299373Z [rank0]:[W1204 11:59:42.711437878 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:05:39.7299778Z FAILED [12.3263s] [100%] 2025-12-04T12:05:39.7299844Z 2025-12-04T12:05:39.7299905Z =================================== FAILURES =================================== 2025-12-04T12:05:39.7300136Z _ TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda _ 2025-12-04T12:05:39.7300354Z Traceback (most recent call last): 2025-12-04T12:05:39.7300661Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:05:39.7300927Z self._join_processes(fn) 2025-12-04T12:05:39.7301173Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:05:39.7301437Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:05:39.7301707Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:05:39.7301966Z raise RuntimeError(error) 2025-12-04T12:05:39.7302120Z RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:05:39.7302281Z Traceback (most recent call last): 2025-12-04T12:05:39.7302522Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7302765Z getattr(self, test_name)() 2025-12-04T12:05:39.7302995Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7303224Z fn() 2025-12-04T12:05:39.7303460Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7303691Z method(*args, **kwargs) 2025-12-04T12:05:39.7303915Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7304146Z method(*args, 
**kwargs) 2025-12-04T12:05:39.7304363Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7304589Z with policy(): 2025-12-04T12:05:39.7304799Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7305032Z raise RuntimeError(msg) 2025-12-04T12:05:39.7318295Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704. 2025-12-04T12:05:39.7318745Z 2025-12-04T12:05:39.7318882Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7319269Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7319580Z 2025-12-04T12:05:39.7319672Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7319801Z 2025-12-04T12:05:39.7319863Z Process 2 exited with error code 10 and exception: 2025-12-04T12:05:39.7320009Z Traceback (most recent call last): 2025-12-04T12:05:39.7320262Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7320508Z getattr(self, test_name)() 2025-12-04T12:05:39.7320789Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7320828Z fn() 2025-12-04T12:05:39.7320983Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7321027Z method(*args, **kwargs) 2025-12-04T12:05:39.7321175Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7321219Z method(*args, **kwargs) 2025-12-04T12:05:39.7321367Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7321452Z with policy(): 2025-12-04T12:05:39.7321603Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7321647Z raise RuntimeError(msg) 2025-12-04T12:05:39.7322040Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152. 
2025-12-04T12:05:39.7322047Z 2025-12-04T12:05:39.7322123Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7322398Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7322400Z 2025-12-04T12:05:39.7322489Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7322492Z 2025-12-04T12:05:39.7322494Z 2025-12-04T12:05:39.7322578Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:05:39.7322667Z Process 0 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:05:39.7322908Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-4364efd8473c8991.xml - 2025-12-04T12:05:39.7322970Z =========================== short test summary info ============================ 2025-12-04T12:05:39.7323259Z FAILED [12.3263s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda - RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:05:39.7323307Z Traceback (most recent call last): 2025-12-04T12:05:39.7323470Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7323518Z getattr(self, test_name)() 2025-12-04T12:05:39.7323676Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7323712Z fn() 2025-12-04T12:05:39.7323901Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7323944Z method(*args, **kwargs) 2025-12-04T12:05:39.7324094Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7324137Z method(*args, **kwargs) 2025-12-04T12:05:39.7324285Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7324325Z with policy(): 2025-12-04T12:05:39.7324477Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7324521Z raise RuntimeError(msg) 2025-12-04T12:05:39.7324906Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704. 
2025-12-04T12:05:39.7324909Z 2025-12-04T12:05:39.7324992Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7325260Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7325264Z 2025-12-04T12:05:39.7325349Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7325373Z 2025-12-04T12:05:39.7325434Z Process 2 exited with error code 10 and exception: 2025-12-04T12:05:39.7325479Z Traceback (most recent call last): 2025-12-04T12:05:39.7325641Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7325684Z getattr(self, test_name)() 2025-12-04T12:05:39.7325848Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7325882Z fn() 2025-12-04T12:05:39.7326032Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7326073Z method(*args, **kwargs) 2025-12-04T12:05:39.7326224Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7326264Z method(*args, **kwargs) 2025-12-04T12:05:39.7326416Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7326452Z with policy(): 2025-12-04T12:05:39.7326605Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7326646Z raise RuntimeError(msg) 2025-12-04T12:05:39.7327031Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152. 2025-12-04T12:05:39.7327033Z 2025-12-04T12:05:39.7327109Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7327373Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7327377Z 2025-12-04T12:05:39.7327466Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7327531Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
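Note: the ProcessGroupNCCL warning above ("destroy_process_group() was not called before program exit, which can leak resources") points at the teardown the linked docs recommend. A minimal sketch, assuming an env:// rendezvous (MASTER_ADDR/MASTER_PORT already set) and `body` as a placeholder for the per-rank work:

    import torch.distributed as dist

    def main(rank, world_size, body):
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        try:
            body(rank)  # the per-rank test or training work
        finally:
            # Explicit teardown releases communicator resources and silences
            # the exit-time resource-leak warning quoted above.
            dist.destroy_process_group()

Putting the teardown in a finally block keeps it on the failure path too, which matters here since these tests exit early with code 10.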
2025-12-04T12:05:39.7327598Z ======================= 1 failed, 9 deselected in 12.34s ======================= 2025-12-04T12:05:39.7327635Z Got exit code 1 2025-12-04T12:05:39.7327885Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7328012Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set 2025-12-04T12:05:39.7328204Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-9ffbe4a89d4c388f.xml 2025-12-04T12:05:39.7328264Z ============================= test session starts ============================== 2025-12-04T12:05:39.7328381Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7328423Z cachedir: .pytest_cache 2025-12-04T12:05:39.7328584Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7328631Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7328678Z configfile: pytest.ini 2025-12-04T12:05:39.7328842Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7328916Z collecting ... collected 10 items / 2 deselected / 8 selected 2025-12-04T12:05:39.7328973Z stepcurrent: skipping 2 already run items. 2025-12-04T12:05:39.7329017Z Running 8 items in this shard 2025-12-04T12:05:39.7329019Z 2025-12-04T12:05:39.7329362Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda I1204 11:59:47.392000 418095 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 418164 2025-12-04T12:05:39.7329539Z I1204 11:59:47.393000 418095 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 418165 2025-12-04T12:05:39.7329695Z I1204 11:59:47.393000 418095 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 418166 2025-12-04T12:05:39.7329844Z I1204 11:59:47.394000 418095 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 418167 2025-12-04T12:05:39.7330207Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7330257Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7330797Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
2025-12-04T12:05:39.7330863Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7331218Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7331268Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7331748Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7331815Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7332196Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7332245Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7332721Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7332780Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7333129Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7333174Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7333653Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
2025-12-04T12:05:39.7333740Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7333883Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7334045Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7334338Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7334495Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7334778Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7334908Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7335182Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7335333Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7335608Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7335754Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7336071Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7336207Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7336503Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7336650Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7337163Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504. 
2025-12-04T12:05:39.7337282Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7337482Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7337878Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda 2025-12-04T12:05:39.7337992Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7338230Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7338394Z [rank3]:E1204 11:59:57.297000 418167 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:05:39.7338435Z dist init r=3, world=4 2025-12-04T12:05:39.7338577Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7338736Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7339023Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7339176Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7339458Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7339582Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7339859Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7340006Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7340282Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7340430Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7340764Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7340903Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] 
with policy():
2025-12-04T12:05:39.7341177Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7341328Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7341839Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368.
2025-12-04T12:05:39.7341953Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7342150Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7342575Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7342689Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7342900Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7343066Z [rank1]:E1204 11:59:57.427000 418165 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7343107Z dist init r=1, world=4
2025-12-04T12:05:39.7343247Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7343407Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7343689Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7343844Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7344124Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7344249Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7344524Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7344672Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7344966Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7345111Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7345385Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7345521Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7345799Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7345947Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7346452Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152.
2025-12-04T12:05:39.7346588Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7346782Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7347177Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7347289Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7347501Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7347664Z [rank2]:E1204 11:59:57.432000 418166 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.7347705Z dist init r=2, world=4
2025-12-04T12:05:39.7347840Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7348003Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7348289Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7348441Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7348724Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7348848Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7349144Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7349290Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7349565Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7349715Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7349987Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7350125Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7350401Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7350550Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7351091Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
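Note: the RuntimeError above comes from the CUDA memory-leak checker enabled here via PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1, which samples per-device memory counters before the test body and compares them afterwards. Every rank reports the same caching-allocator growth (19456 - 512 = 18944 bytes), while the driver-level totals differ per device. A minimal sketch of the before/after idea, assuming only that torch sees a CUDA/ROCm device (check_leak is an illustrative helper, not PyTorch's actual implementation):

    import torch

    def check_leak(test_fn, device=0):
        # Settle pending GPU work and return cached blocks before sampling.
        torch.cuda.synchronize(device)
        torch.cuda.empty_cache()
        before = torch.cuda.memory_allocated(device)  # caching-allocator bytes
        test_fn()
        torch.cuda.synchronize(device)
        torch.cuda.empty_cache()
        after = torch.cuda.memory_allocated(device)
        if after > before:
            raise RuntimeError(
                f"possible leak: allocated memory was {before} and is now {after}"
            )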
2025-12-04T12:05:39.7351239Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7351432Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7351821Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7351937Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7352144Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7352310Z [rank0]:E1204 11:59:57.494000 418164 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7352349Z dist init r=0, world=4
2025-12-04T12:05:39.7352687Z [rank0]:[W1204 11:59:57.616276791 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:05:39.7352730Z FAILED [11.8263s] [ 12%]
2025-12-04T12:05:39.7352732Z
2025-12-04T12:05:39.7352791Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7352928Z _ TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda _
2025-12-04T12:05:39.7352975Z Traceback (most recent call last):
2025-12-04T12:05:39.7353141Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7353185Z self._join_processes(fn)
2025-12-04T12:05:39.7353388Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7353443Z self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7353621Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7353665Z raise RuntimeError(error)
2025-12-04T12:05:39.7353748Z RuntimeError: Process 3 exited with error code 10 and exception:
2025-12-04T12:05:39.7353794Z Traceback (most recent call last):
2025-12-04T12:05:39.7353956Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7353999Z getattr(self, test_name)()
2025-12-04T12:05:39.7354159Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7354196Z fn()
2025-12-04T12:05:39.7354349Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7354389Z method(*args, **kwargs)
2025-12-04T12:05:39.7354541Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7354582Z method(*args, **kwargs)
2025-12-04T12:05:39.7354732Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7354795Z with policy():
2025-12-04T12:05:39.7354947Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7354990Z raise RuntimeError(msg)
2025-12-04T12:05:39.7355378Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504.
2025-12-04T12:05:39.7355380Z
2025-12-04T12:05:39.7355459Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7355726Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7355730Z
2025-12-04T12:05:39.7355821Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7355824Z
2025-12-04T12:05:39.7355826Z
2025-12-04T12:05:39.7355903Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7355992Z Process 3 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7356228Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-9ffbe4a89d4c388f.xml -
2025-12-04T12:05:39.7356289Z =========================== short test summary info ============================
2025-12-04T12:05:39.7356571Z FAILED [11.8263s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda - RuntimeError: Process 3 exited with error code 10 and exception:
2025-12-04T12:05:39.7356617Z Traceback (most recent call last):
2025-12-04T12:05:39.7356782Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7356825Z getattr(self, test_name)()
2025-12-04T12:05:39.7356982Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7357019Z fn()
2025-12-04T12:05:39.7357190Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7357234Z method(*args, **kwargs)
2025-12-04T12:05:39.7357384Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7357423Z method(*args, **kwargs)
2025-12-04T12:05:39.7357577Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7357616Z with policy():
2025-12-04T12:05:39.7357770Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7357811Z raise RuntimeError(msg)
2025-12-04T12:05:39.7358196Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504.
2025-12-04T12:05:39.7358198Z
2025-12-04T12:05:39.7358274Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7358539Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7358542Z
2025-12-04T12:05:39.7358631Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7358722Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
2025-12-04T12:05:39.7358788Z ======================= 1 failed, 2 deselected in 11.84s =======================
2025-12-04T12:05:39.7358825Z Got exit code 1
2025-12-04T12:05:39.7358868Z Retrying single test...
2025-12-04T12:05:39.7359058Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-f8eebc2542d0fa82.xml
2025-12-04T12:05:39.7359119Z ============================= test session starts ==============================
2025-12-04T12:05:39.7359234Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.7359277Z cachedir: .pytest_cache
2025-12-04T12:05:39.7359432Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.7359483Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.7359523Z configfile: pytest.ini
2025-12-04T12:05:39.7359686Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.7359759Z collecting ... collected 10 items / 9 deselected / 1 selected
2025-12-04T12:05:39.7360023Z stepcurrent: skipping 2 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7360070Z Running 1 items in this shard
2025-12-04T12:05:39.7360072Z
2025-12-04T12:05:39.7360414Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda I1204 12:00:01.908000 418497 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 418566
2025-12-04T12:05:39.7360571Z I1204 12:00:01.908000 418497 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 418567
2025-12-04T12:05:39.7360758Z I1204 12:00:01.909000 418497 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 418568
2025-12-04T12:05:39.7360910Z I1204 12:00:01.910000 418497 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 418569
2025-12-04T12:05:39.7361315Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7361367Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7361855Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7361919Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7362271Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7362318Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7362798Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7362888Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7363237Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7363284Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7363763Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7363823Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7364171Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7364216Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7364695Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7364754Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7364896Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7365057Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7365348Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7365520Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7365831Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7365956Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7366229Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7366690Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7366965Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7367113Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7367385Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7367545Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7367819Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7367967Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7368475Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368.
2025-12-04T12:05:39.7368591Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7368784Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7369184Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7369295Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7369505Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7369669Z [rank1]:E1204 12:00:11.975000 418567 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7369710Z dist init r=1, world=4
2025-12-04T12:05:39.7369846Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7370026Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7370309Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7370462Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7370795Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7370919Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7371197Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7371344Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7371618Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7371795Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7372069Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7372206Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7372479Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7372626Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7373129Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2462056448 and is now 3307208704.
2025-12-04T12:05:39.7373246Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7373440Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7373831Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7373944Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7374155Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7374349Z [rank0]:E1204 12:00:12.042000 418566 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7374388Z dist init r=0, world=4
2025-12-04T12:05:39.7374530Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7374687Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7374970Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7375125Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7375409Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7375531Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7375806Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7375952Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7376244Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7376391Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7376661Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7376797Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7377070Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7377220Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7377728Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504.
2025-12-04T12:05:39.7377840Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7378033Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7378423Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7378555Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7378761Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7378924Z [rank3]:E1204 12:00:12.076000 418569 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7378963Z dist init r=3, world=4
2025-12-04T12:05:39.7379100Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7379261Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7379548Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7379700Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7379979Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7380102Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7380394Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7380541Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7380845Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7380990Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7381261Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7381396Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7381673Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7381818Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7382322Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152.
2025-12-04T12:05:39.7382438Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7382630Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7383050Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7383162Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7383371Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7383533Z [rank2]:E1204 12:00:12.133000 418568 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.7383573Z dist init r=2, world=4
2025-12-04T12:05:39.7383911Z [rank0]:[W1204 12:00:12.128452693 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:05:39.7383952Z FAILED [12.1270s] [100%]
2025-12-04T12:05:39.7383954Z
2025-12-04T12:05:39.7384013Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7384145Z _ TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda _
2025-12-04T12:05:39.7384191Z Traceback (most recent call last):
2025-12-04T12:05:39.7384381Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7384425Z self._join_processes(fn)
2025-12-04T12:05:39.7384597Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7384654Z self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7384833Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7384878Z raise RuntimeError(error)
2025-12-04T12:05:39.7384956Z RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7385001Z Traceback (most recent call last):
2025-12-04T12:05:39.7385160Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7385208Z getattr(self, test_name)()
2025-12-04T12:05:39.7385365Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7385402Z fn()
2025-12-04T12:05:39.7385552Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7385595Z method(*args, **kwargs)
2025-12-04T12:05:39.7385744Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7385786Z method(*args, **kwargs)
2025-12-04T12:05:39.7385936Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7385974Z with policy():
2025-12-04T12:05:39.7386128Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7386170Z raise RuntimeError(msg)
2025-12-04T12:05:39.7386552Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2462056448 and is now 3307208704.
2025-12-04T12:05:39.7386554Z
2025-12-04T12:05:39.7386629Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7386918Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7386920Z
2025-12-04T12:05:39.7387007Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7387009Z
2025-12-04T12:05:39.7387013Z
2025-12-04T12:05:39.7387091Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7387183Z Process 0 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7387418Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-f8eebc2542d0fa82.xml -
2025-12-04T12:05:39.7387484Z =========================== short test summary info ============================
2025-12-04T12:05:39.7387768Z FAILED [12.1270s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda - RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7387819Z Traceback (most recent call last):
2025-12-04T12:05:39.7387981Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7388027Z getattr(self, test_name)()
2025-12-04T12:05:39.7388187Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7388247Z fn()
2025-12-04T12:05:39.7388398Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7388441Z method(*args, **kwargs)
2025-12-04T12:05:39.7388591Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7388637Z method(*args, **kwargs)
2025-12-04T12:05:39.7388786Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7388826Z with policy():
2025-12-04T12:05:39.7388974Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7389016Z raise RuntimeError(msg)
2025-12-04T12:05:39.7389399Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2462056448 and is now 3307208704.
2025-12-04T12:05:39.7389404Z
2025-12-04T12:05:39.7389477Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7389745Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7389747Z
2025-12-04T12:05:39.7389833Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7389897Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
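Note: besides the leak itself, both failed runs print the ProcessGroupNCCL warning that destroy_process_group() was not called before program exit. The usual remedy is an explicit teardown when a rank finishes. A minimal sketch, assuming a conventional torch.distributed worker (worker() and its init arguments are illustrative; MASTER_ADDR/MASTER_PORT must be set in the environment):

    import torch.distributed as dist

    def worker(rank: int, world_size: int) -> None:
        # Join the default process group for this rank.
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        try:
            ...  # test or training body for this rank
        finally:
            dist.destroy_process_group()  # avoids the resource-leak warning at exit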
2025-12-04T12:05:39.7389958Z ======================= 1 failed, 9 deselected in 12.14s =======================
2025-12-04T12:05:39.7389998Z Got exit code 1
2025-12-04T12:05:39.7390038Z Retrying single test...
2025-12-04T12:05:39.7390224Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-4e9f3a61287ad2a6.xml
2025-12-04T12:05:39.7390280Z ============================= test session starts ==============================
2025-12-04T12:05:39.7390396Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.7390464Z cachedir: .pytest_cache
2025-12-04T12:05:39.7390656Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.7390703Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.7390747Z configfile: pytest.ini
2025-12-04T12:05:39.7390909Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.7390984Z collecting ... collected 10 items / 9 deselected / 1 selected
2025-12-04T12:05:39.7391248Z stepcurrent: skipping 2 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7391291Z Running 1 items in this shard
2025-12-04T12:05:39.7391293Z
2025-12-04T12:05:39.7391636Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda I1204 12:00:16.826000 418899 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 418968
2025-12-04T12:05:39.7391789Z I1204 12:00:16.827000 418899 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 418969
2025-12-04T12:05:39.7391942Z I1204 12:00:16.828000 418899 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 418970
2025-12-04T12:05:39.7392089Z I1204 12:00:16.828000 418899 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 418971
2025-12-04T12:05:39.7392478Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7392526Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7393014Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7393078Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7393431Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7393483Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7393967Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7394028Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7394380Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7394429Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7394935Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7394993Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7395344Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7395392Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7395875Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7395938Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7396079Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7396241Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7396526Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7396703Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7396985Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7397109Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7397386Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7397534Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7397813Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7397959Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7398235Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7398369Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7398645Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7398792Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7399318Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504.
2025-12-04T12:05:39.7399434Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7399629Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7400022Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7400135Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7400344Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7400508Z [rank3]:E1204 12:00:26.894000 418971 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7400546Z dist init r=3, world=4
2025-12-04T12:05:39.7400717Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7400902Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7401185Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7401339Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7401620Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7401740Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7402014Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7402160Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7402435Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7402585Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7402856Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7402996Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7403268Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7403479Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7403986Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
2025-12-04T12:05:39.7404101Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7404296Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7404686Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7404799Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7405008Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7405196Z [rank0]:E1204 12:00:27.020000 418968 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7405235Z dist init r=0, world=4
2025-12-04T12:05:39.7405371Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7405531Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7405814Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7405966Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7406245Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7406369Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7406645Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7406791Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7407063Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7407210Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7407483Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7407638Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7407914Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7408060Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7408565Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368.
2025-12-04T12:05:39.7408681Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7408874Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7409268Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7409400Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7409610Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7409771Z [rank1]:E1204 12:00:27.032000 418969 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7409813Z dist init r=1, world=4
2025-12-04T12:05:39.7409952Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7410108Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7410393Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7410547Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7410863Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7410987Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7411263Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7411410Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7411682Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7411853Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7412124Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7412259Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7412533Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7412679Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7413187Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152.
2025-12-04T12:05:39.7413299Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7413491Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7413908Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7414023Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7414230Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7414393Z [rank2]:E1204 12:00:27.123000 418970 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.7414431Z dist init r=2, world=4
2025-12-04T12:05:39.7414769Z [rank0]:[W1204 12:00:27.128172881 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:05:39.7414813Z FAILED [12.0253s] [100%]
2025-12-04T12:05:39.7414815Z
2025-12-04T12:05:39.7414870Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7415007Z _ TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda _
2025-12-04T12:05:39.7415052Z Traceback (most recent call last):
2025-12-04T12:05:39.7415214Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7415256Z self._join_processes(fn)
2025-12-04T12:05:39.7415429Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7415484Z self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7415661Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7415703Z raise RuntimeError(error)
2025-12-04T12:05:39.7415783Z RuntimeError: Process 3 exited with error code 10 and exception:
2025-12-04T12:05:39.7415847Z Traceback (most recent call last):
2025-12-04T12:05:39.7416008Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7416050Z getattr(self, test_name)()
2025-12-04T12:05:39.7416208Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7416241Z fn()
2025-12-04T12:05:39.7416392Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7416435Z method(*args, **kwargs)
2025-12-04T12:05:39.7416585Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7416626Z method(*args,
**kwargs) 2025-12-04T12:05:39.7416773Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7416813Z with policy(): 2025-12-04T12:05:39.7416963Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7417004Z raise RuntimeError(msg) 2025-12-04T12:05:39.7417385Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504. 2025-12-04T12:05:39.7417411Z 2025-12-04T12:05:39.7417488Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7417758Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda 2025-12-04T12:05:39.7417760Z 2025-12-04T12:05:39.7417850Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7417853Z 2025-12-04T12:05:39.7417854Z 2025-12-04T12:05:39.7417930Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:05:39.7418017Z Process 3 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:05:39.7418249Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-4e9f3a61287ad2a6.xml - 2025-12-04T12:05:39.7418311Z =========================== short test summary info ============================ 2025-12-04T12:05:39.7418590Z FAILED [12.0253s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda - RuntimeError: Process 3 exited with error code 10 and exception: 2025-12-04T12:05:39.7418635Z Traceback (most recent call last): 2025-12-04T12:05:39.7418798Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7418840Z getattr(self, test_name)() 2025-12-04T12:05:39.7418996Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7419029Z fn() 2025-12-04T12:05:39.7419179Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7419220Z method(*args, **kwargs) 2025-12-04T12:05:39.7419370Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7419409Z method(*args, **kwargs) 2025-12-04T12:05:39.7419558Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7419594Z with policy(): 2025-12-04T12:05:39.7419763Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7419804Z raise RuntimeError(msg) 2025-12-04T12:05:39.7420185Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda! 
Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504. 2025-12-04T12:05:39.7420189Z 2025-12-04T12:05:39.7420264Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7420530Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda 2025-12-04T12:05:39.7420532Z 2025-12-04T12:05:39.7420656Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7420721Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 2025-12-04T12:05:39.7420782Z ======================= 1 failed, 9 deselected in 12.04s ======================= 2025-12-04T12:05:39.7420819Z Got exit code 1 2025-12-04T12:05:39.7421035Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda 2025-12-04T12:05:39.7421162Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set 2025-12-04T12:05:39.7421392Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-b81c3dcad1b2b398.xml 2025-12-04T12:05:39.7421450Z ============================= test session starts ============================== 2025-12-04T12:05:39.7421561Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7421604Z cachedir: .pytest_cache 2025-12-04T12:05:39.7421759Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7421805Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7421844Z configfile: pytest.ini 2025-12-04T12:05:39.7422004Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7422075Z collecting ... collected 10 items / 3 deselected / 7 selected 2025-12-04T12:05:39.7422132Z stepcurrent: skipping 3 already run items. 
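[The repro command above sets PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1, which enables the harness check that raised the RuntimeError: it records caching-allocator and driver-level memory before the test and compares again after it. A minimal sketch of that kind of before/after check, not the harness's exact implementation (run_test is a stand-in callable), based on the two numbers quoted in the error message:

    import torch

    def assert_no_cuda_leak(run_test, device=0):
        # Baseline: caching-allocator bytes, and driver-level allocated
        # bytes computed as total - free (the two figures in the log).
        torch.cuda.synchronize(device)
        alloc_before = torch.cuda.memory_allocated(device)
        free, total = torch.cuda.mem_get_info(device)
        driver_before = total - free

        run_test()

        torch.cuda.synchronize(device)
        torch.cuda.empty_cache()  # drop cached blocks before re-measuring
        alloc_after = torch.cuda.memory_allocated(device)
        free, total = torch.cuda.mem_get_info(device)
        driver_after = total - free

        # Only report a leak when the driver-level numbers agree with the
        # allocator-level growth, i.e. the driver "confirmed" the leak.
        if alloc_after > alloc_before and driver_after > driver_before:
            raise RuntimeError(
                f"Caching allocator allocated memory was {alloc_before} and is "
                f"now reported as {alloc_after} on device {device}. CUDA driver "
                f"allocated memory was {driver_before} and is now {driver_after}."
            )
]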
2025-12-04T12:05:39.7421392Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-b81c3dcad1b2b398.xml
2025-12-04T12:05:39.7421450Z ============================= test session starts ==============================
2025-12-04T12:05:39.7421561Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.7421604Z cachedir: .pytest_cache
2025-12-04T12:05:39.7421759Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.7421805Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.7421844Z configfile: pytest.ini
2025-12-04T12:05:39.7422004Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.7422075Z collecting ... collected 10 items / 3 deselected / 7 selected
2025-12-04T12:05:39.7422132Z stepcurrent: skipping 3 already run items.
2025-12-04T12:05:39.7422174Z Running 7 items in this shard
2025-12-04T12:05:39.7422176Z
2025-12-04T12:05:39.7422516Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda I1204 12:00:31.685000 419301 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 419370
2025-12-04T12:05:39.7422670Z I1204 12:00:31.685000 419301 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 419371
2025-12-04T12:05:39.7422820Z I1204 12:00:31.686000 419301 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 419372
2025-12-04T12:05:39.7422967Z I1204 12:00:31.687000 419301 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 419373
2025-12-04T12:05:39.7423322Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7423373Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7423746Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7423793Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7424275Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7424339Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7424817Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7424877Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7425226Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7425272Z self.encoder = TransformerEncoder(
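[The transformer.py:144 UserWarning above fires when a TransformerEncoder is built with enable_nested_tensor=True but its layer was not constructed with batch_first=True, so the nested-tensor fast path is silently disabled. A minimal sketch of the construction that avoids the warning; the dimensions are illustrative, not taken from the test:

    import torch.nn as nn

    # batch_first=True on the layer lets the encoder actually use the
    # nested-tensor fast path instead of warning and falling back.
    encoder_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    encoder = nn.TransformerEncoder(encoder_layer, num_layers=2, enable_nested_tensor=True)
]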
2025-12-04T12:05:39.7425773Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7425832Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7426181Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7426226Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7426705Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7426766Z device_from_device_id = _get_device_from_device_id(
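[The _init_utils.py:571 warning repeats on every rank because the test hands FSDP a bare "cuda" device with no index; the warning text itself names the fix. A minimal sketch following that suggestion, assuming a torchrun-style launcher that sets LOCAL_RANK and a process group that is already initialized (the Linear model is a placeholder):

    import os
    import torch
    import torch.nn as nn
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    rank = int(os.environ["LOCAL_RANK"])  # assumes a torchrun-style launcher
    torch.cuda.set_device(rank)           # the explicit call the warning asks for
    # ...or, equivalently, hand FSDP an indexed device instead of bare "cuda":
    model = FSDP(nn.Linear(8, 8).cuda(rank), device_id=torch.device("cuda", rank))
]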
2025-12-04T12:05:39.7426907Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7427068Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7427355Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7427508Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7427792Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7427917Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7428211Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7428356Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7428628Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7428775Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7429052Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7429186Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7429461Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7429608Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7430137Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152.
2025-12-04T12:05:39.7430253Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7430447Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7430877Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7430991Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7431201Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7431363Z [rank2]:E1204 12:00:41.841000 419372 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.7431402Z dist init r=2, world=4
2025-12-04T12:05:39.7431540Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7431697Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7431984Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7432135Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7432446Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7432569Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7432849Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7432997Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7433271Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7433416Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7433687Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7433821Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7434122Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7434269Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7434776Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
2025-12-04T12:05:39.7434889Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7435085Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7435476Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7435589Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7435794Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7435957Z [rank0]:E1204 12:00:41.866000 419370 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7435997Z dist init r=0, world=4
2025-12-04T12:05:39.7436133Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7436291Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7436598Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7436750Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7437029Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7437153Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7437426Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7437572Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7437843Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7437987Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7438305Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7438438Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7438713Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7438858Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7439361Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504.
2025-12-04T12:05:39.7439476Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7439670Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7440062Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7440172Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7440382Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7440545Z [rank3]:E1204 12:00:41.877000 419373 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7440582Z dist init r=3, world=4
2025-12-04T12:05:39.7440804Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7440961Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7441246Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7441398Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7441678Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7441800Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7442072Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7442218Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7442516Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7442663Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7442934Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7443069Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7443342Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7443492Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7443996Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368.
2025-12-04T12:05:39.7444108Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7444303Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7444693Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7444808Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7445034Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7445197Z [rank1]:E1204 12:00:41.922000 419371 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7445236Z dist init r=1, world=4
2025-12-04T12:05:39.7445566Z [rank0]:[W1204 12:00:42.984177252 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
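[The ProcessGroupNCCL warning above means the worker processes exited (here with code 10 from the leak check) without tearing the process group down. A minimal sketch of the teardown the warning's link describes, assuming a standard init_process_group setup; run_workload is a hypothetical stand-in for whatever collective work the process does:

    import torch.distributed as dist

    # dist.init_process_group("nccl", ...) happens at startup; pairing it
    # with destroy_process_group() on exit avoids the resource-leak warning.
    try:
        run_workload()
    finally:
        if dist.is_initialized():
            dist.destroy_process_group()
]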
2025-12-04T12:05:39.7445609Z FAILED [12.2276s] [ 14%]
2025-12-04T12:05:39.7445611Z
2025-12-04T12:05:39.7445665Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7445840Z _ TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda _
2025-12-04T12:05:39.7445887Z Traceback (most recent call last):
2025-12-04T12:05:39.7446049Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7446090Z self._join_processes(fn)
2025-12-04T12:05:39.7446263Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7446315Z self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7446491Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7446564Z raise RuntimeError(error)
2025-12-04T12:05:39.7446645Z RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7446688Z Traceback (most recent call last):
2025-12-04T12:05:39.7446853Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7446895Z getattr(self, test_name)()
2025-12-04T12:05:39.7447052Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7447087Z fn()
2025-12-04T12:05:39.7447237Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7447279Z method(*args, **kwargs)
2025-12-04T12:05:39.7447427Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7447470Z method(*args, **kwargs)
2025-12-04T12:05:39.7447617Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7447656Z with policy():
2025-12-04T12:05:39.7447806Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7447847Z raise RuntimeError(msg)
2025-12-04T12:05:39.7448227Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
2025-12-04T12:05:39.7448230Z
2025-12-04T12:05:39.7448307Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7448577Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7448579Z
2025-12-04T12:05:39.7448668Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7448669Z
2025-12-04T12:05:39.7448751Z Process 2 exited with error code 10 and exception:
2025-12-04T12:05:39.7448798Z Traceback (most recent call last):
2025-12-04T12:05:39.7448959Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7449000Z getattr(self, test_name)()
2025-12-04T12:05:39.7449159Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7449193Z fn()
2025-12-04T12:05:39.7449344Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7449383Z method(*args, **kwargs)
2025-12-04T12:05:39.7449532Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7449572Z method(*args, **kwargs)
2025-12-04T12:05:39.7449722Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7449758Z with policy():
2025-12-04T12:05:39.7449908Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7449948Z raise RuntimeError(msg)
2025-12-04T12:05:39.7450328Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152.
2025-12-04T12:05:39.7450352Z
2025-12-04T12:05:39.7450426Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7450737Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7450740Z
2025-12-04T12:05:39.7450831Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7450834Z
2025-12-04T12:05:39.7450891Z Process 3 exited with error code 10 and exception:
2025-12-04T12:05:39.7450937Z Traceback (most recent call last):
2025-12-04T12:05:39.7451098Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7451140Z getattr(self, test_name)()
2025-12-04T12:05:39.7451298Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7451335Z fn()
2025-12-04T12:05:39.7451484Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7451525Z method(*args, **kwargs)
2025-12-04T12:05:39.7451675Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7451716Z method(*args, **kwargs)
2025-12-04T12:05:39.7451864Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7451901Z with policy():
2025-12-04T12:05:39.7452051Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7452094Z raise RuntimeError(msg)
2025-12-04T12:05:39.7452477Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504.
2025-12-04T12:05:39.7452480Z
2025-12-04T12:05:39.7452554Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7452860Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7452862Z
2025-12-04T12:05:39.7452948Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7452950Z
2025-12-04T12:05:39.7452952Z
2025-12-04T12:05:39.7453028Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7453118Z Process 0 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7453351Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-b81c3dcad1b2b398.xml -
2025-12-04T12:05:39.7453411Z =========================== short test summary info ============================
2025-12-04T12:05:39.7453694Z FAILED [12.2276s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda - RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7453741Z Traceback (most recent call last):
2025-12-04T12:05:39.7453904Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7453949Z getattr(self, test_name)()
2025-12-04T12:05:39.7454105Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7454175Z fn()
2025-12-04T12:05:39.7454323Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7454365Z method(*args, **kwargs)
2025-12-04T12:05:39.7454513Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7454556Z method(*args, **kwargs)
2025-12-04T12:05:39.7454703Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7454740Z with policy():
2025-12-04T12:05:39.7454888Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7454932Z raise RuntimeError(msg)
2025-12-04T12:05:39.7455313Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
2025-12-04T12:05:39.7455316Z
2025-12-04T12:05:39.7455391Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7455657Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7455661Z
2025-12-04T12:05:39.7455747Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7455748Z
2025-12-04T12:05:39.7455807Z Process 2 exited with error code 10 and exception:
2025-12-04T12:05:39.7455851Z Traceback (most recent call last):
2025-12-04T12:05:39.7456014Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7456057Z getattr(self, test_name)()
2025-12-04T12:05:39.7456213Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7456247Z fn()
2025-12-04T12:05:39.7456395Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7456456Z method(*args, **kwargs)
2025-12-04T12:05:39.7456607Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7456646Z method(*args, **kwargs)
2025-12-04T12:05:39.7456795Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7456831Z with policy():
2025-12-04T12:05:39.7456983Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7457025Z raise RuntimeError(msg)
2025-12-04T12:05:39.7457407Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152.
2025-12-04T12:05:39.7457411Z
2025-12-04T12:05:39.7457484Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7457753Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7457754Z
2025-12-04T12:05:39.7457840Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7457842Z
2025-12-04T12:05:39.7457919Z Process 3 exited with error code 10 and exception:
2025-12-04T12:05:39.7457964Z Traceback (most recent call last):
2025-12-04T12:05:39.7458123Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7458166Z getattr(self, test_name)()
2025-12-04T12:05:39.7458322Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7458356Z fn()
2025-12-04T12:05:39.7458505Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7458547Z method(*args, **kwargs)
2025-12-04T12:05:39.7458695Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7458736Z method(*args, **kwargs)
2025-12-04T12:05:39.7458884Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7458924Z with policy():
2025-12-04T12:05:39.7459073Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7459115Z raise RuntimeError(msg)
2025-12-04T12:05:39.7459497Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504.
2025-12-04T12:05:39.7459502Z
2025-12-04T12:05:39.7459575Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7459844Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7459847Z
2025-12-04T12:05:39.7459933Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7459997Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
2025-12-04T12:05:39.7460058Z ======================= 1 failed, 3 deselected in 12.25s =======================
2025-12-04T12:05:39.7460097Z Got exit code 1
2025-12-04T12:05:39.7460137Z Retrying single test...
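[The repro instructions printed with each failure set two environment variables and invoke the test file directly. A sketch of launching that exact command from Python rather than a shell; the subprocess wrapper is illustrative, while the command and environment variables are verbatim from the log above:

    import os
    import subprocess

    env = dict(os.environ,
               PYTORCH_TEST_WITH_ROCM="1",
               PYTORCH_TEST_CUDA_MEM_LEAK_CHECK="1")
    subprocess.run(
        [
            "python",
            "test/distributed/fsdp/test_fsdp_comm.py",
            "TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda",
        ],
        env=env,
        check=True,  # raises CalledProcessError on the nonzero exit seen above
    )
]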
2025-12-04T12:05:39.7460344Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-8c37bec74ad58026.xml 2025-12-04T12:05:39.7460402Z ============================= test session starts ============================== 2025-12-04T12:05:39.7460515Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7460555Z cachedir: .pytest_cache 2025-12-04T12:05:39.7460752Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7460798Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7460840Z configfile: pytest.ini 2025-12-04T12:05:39.7461000Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7461074Z collecting ... collected 10 items / 9 deselected / 1 selected 2025-12-04T12:05:39.7461336Z stepcurrent: skipping 3 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda 2025-12-04T12:05:39.7461379Z Running 1 items in this shard 2025-12-04T12:05:39.7461382Z 2025-12-04T12:05:39.7461721Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda I1204 12:00:46.524000 419703 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 419772 2025-12-04T12:05:39.7461910Z I1204 12:00:46.525000 419703 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 419773 2025-12-04T12:05:39.7462063Z I1204 12:00:46.525000 419703 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 419774 2025-12-04T12:05:39.7462218Z I1204 12:00:46.526000 419703 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 419775 2025-12-04T12:05:39.7462577Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7462628Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7462977Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7463027Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7463512Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7463574Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7464053Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. 
FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7464116Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7464494Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7464540Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7465018Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7465080Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7465428Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:05:39.7465473Z self.encoder = TransformerEncoder( 2025-12-04T12:05:39.7465953Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
2025-12-04T12:05:39.7466013Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7466153Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7466348Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7466642Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7466798Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7467080Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7467204Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7467480Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7467630Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7467906Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7468051Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7468325Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7468461Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7468759Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7468905Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7469412Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704. 
2025-12-04T12:05:39.7469528Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7469720Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7470116Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7470227Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7470436Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7470656Z [rank0]:E1204 12:00:56.741000 419772 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7470695Z dist init r=0, world=4
2025-12-04T12:05:39.7470835Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7470994Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7471283Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7471435Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7471720Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7471841Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7472115Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7472263Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7472534Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7472683Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7472981Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7473119Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7473392Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7473540Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7474046Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368.
2025-12-04T12:05:39.7474161Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7474355Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7474744Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7474884Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7475092Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7475257Z [rank1]:E1204 12:00:56.757000 419773 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7475296Z dist init r=1, world=4
2025-12-04T12:05:39.7475431Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7475588Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7475874Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7476026Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7476308Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7476432Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7476705Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7476854Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7477125Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7477290Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7477562Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7477697Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7477972Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7478118Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7478653Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152.
2025-12-04T12:05:39.7478767Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7478980Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7479372Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7479483Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7479692Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7479852Z [rank2]:E1204 12:00:56.763000 419774 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.7479894Z dist init r=2, world=4
2025-12-04T12:05:39.7480032Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7480191Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7480478Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7480664Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7480945Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7481068Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7481342Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7481516Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7481789Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7481934Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7482207Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7482346Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7482621Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7482769Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7483272Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504.
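All four ranks report the same leak for the `use_no_sync_True` parametrization, which exercises FSDP's no_sync() gradient-accumulation context. A minimal sketch of that pattern, with model, batches, and optimizer as placeholders rather than objects from test_fsdp_comm.py:

    import torch
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    def accumulate(model: FSDP, batches: list[torch.Tensor], optimizer) -> None:
        # Inside no_sync(), FSDP skips the gradient reduce-scatter/all-reduce
        # for each backward pass, so microbatch gradients accumulate locally.
        with model.no_sync():
            for batch in batches[:-1]:
                model(batch).sum().backward()
        # The final backward runs outside no_sync(), so gradients synchronize.
        model(batches[-1]).sum().backward()
        optimizer.step()
        optimizer.zero_grad()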
2025-12-04T12:05:39.7483419Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7483615Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7484004Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7484115Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7484323Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7484484Z [rank3]:E1204 12:00:56.774000 419775 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7484521Z dist init r=3, world=4
2025-12-04T12:05:39.7484857Z [rank0]:[W1204 12:00:57.872383953 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:05:39.7484898Z FAILED [12.4257s] [100%]
2025-12-04T12:05:39.7484900Z
2025-12-04T12:05:39.7484956Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7485089Z _ TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda _
2025-12-04T12:05:39.7485136Z Traceback (most recent call last):
2025-12-04T12:05:39.7485297Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7485340Z     self._join_processes(fn)
2025-12-04T12:05:39.7485512Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7485584Z     self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7485762Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7485805Z     raise RuntimeError(error)
2025-12-04T12:05:39.7485886Z RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7485930Z Traceback (most recent call last):
2025-12-04T12:05:39.7486090Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7486135Z     getattr(self, test_name)()
2025-12-04T12:05:39.7486292Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7486326Z     fn()
2025-12-04T12:05:39.7486476Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7486518Z     method(*args, **kwargs)
2025-12-04T12:05:39.7486668Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7486708Z     method(*args, **kwargs)
2025-12-04T12:05:39.7486860Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7486898Z     with policy():
2025-12-04T12:05:39.7487047Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7487114Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7487495Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
2025-12-04T12:05:39.7487499Z
2025-12-04T12:05:39.7487577Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7487842Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7487845Z
2025-12-04T12:05:39.7487933Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7487937Z
2025-12-04T12:05:39.7487995Z Process 1 exited with error code 10 and exception:
2025-12-04T12:05:39.7488042Z Traceback (most recent call last):
2025-12-04T12:05:39.7488203Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7488246Z     getattr(self, test_name)()
2025-12-04T12:05:39.7488404Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7488438Z     fn()
2025-12-04T12:05:39.7488588Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7488628Z     method(*args, **kwargs)
2025-12-04T12:05:39.7488778Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7488817Z     method(*args, **kwargs)
2025-12-04T12:05:39.7488967Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7489005Z     with policy():
2025-12-04T12:05:39.7489156Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7489197Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7489601Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368.
2025-12-04T12:05:39.7489603Z
2025-12-04T12:05:39.7489678Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7489945Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7489949Z
2025-12-04T12:05:39.7490037Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7490039Z
2025-12-04T12:05:39.7490096Z Process 2 exited with error code 10 and exception:
2025-12-04T12:05:39.7490142Z Traceback (most recent call last):
2025-12-04T12:05:39.7490304Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7490348Z     getattr(self, test_name)()
2025-12-04T12:05:39.7490503Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7490540Z     fn()
2025-12-04T12:05:39.7490727Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7490769Z     method(*args, **kwargs)
2025-12-04T12:05:39.7490946Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7490987Z     method(*args, **kwargs)
2025-12-04T12:05:39.7491134Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7491172Z     with policy():
2025-12-04T12:05:39.7491323Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7491365Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7491746Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152.
2025-12-04T12:05:39.7491748Z
2025-12-04T12:05:39.7491822Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7492090Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7492093Z
2025-12-04T12:05:39.7492177Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7492179Z
2025-12-04T12:05:39.7492181Z
2025-12-04T12:05:39.7492260Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7492346Z Process 0 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7492578Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-8c37bec74ad58026.xml -
2025-12-04T12:05:39.7492638Z =========================== short test summary info ============================
2025-12-04T12:05:39.7492919Z FAILED [12.4257s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda - RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7492966Z Traceback (most recent call last):
2025-12-04T12:05:39.7493129Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7493171Z     getattr(self, test_name)()
2025-12-04T12:05:39.7493378Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7493414Z     fn()
2025-12-04T12:05:39.7493563Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7493604Z     method(*args, **kwargs)
2025-12-04T12:05:39.7493752Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7493794Z     method(*args, **kwargs)
2025-12-04T12:05:39.7493941Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7493979Z     with policy():
2025-12-04T12:05:39.7494128Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7494172Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7494550Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
2025-12-04T12:05:39.7494552Z
2025-12-04T12:05:39.7494626Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7494891Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7494916Z
2025-12-04T12:05:39.7495003Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7495006Z
2025-12-04T12:05:39.7495064Z Process 1 exited with error code 10 and exception:
2025-12-04T12:05:39.7495109Z Traceback (most recent call last):
2025-12-04T12:05:39.7495274Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7495315Z     getattr(self, test_name)()
2025-12-04T12:05:39.7495474Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7495508Z     fn()
2025-12-04T12:05:39.7495658Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7495700Z     method(*args, **kwargs)
2025-12-04T12:05:39.7495851Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7495890Z     method(*args, **kwargs)
2025-12-04T12:05:39.7496039Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7496077Z     with policy():
2025-12-04T12:05:39.7496228Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7496268Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7496649Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368.
2025-12-04T12:05:39.7496653Z
2025-12-04T12:05:39.7496726Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7496992Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7496994Z
2025-12-04T12:05:39.7497110Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7497113Z
2025-12-04T12:05:39.7497169Z Process 2 exited with error code 10 and exception:
2025-12-04T12:05:39.7497217Z Traceback (most recent call last):
2025-12-04T12:05:39.7497376Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7497419Z     getattr(self, test_name)()
2025-12-04T12:05:39.7497577Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7497614Z     fn()
2025-12-04T12:05:39.7497764Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7497805Z     method(*args, **kwargs)
2025-12-04T12:05:39.7497952Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7497997Z     method(*args, **kwargs)
2025-12-04T12:05:39.7498144Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7498183Z     with policy():
2025-12-04T12:05:39.7498331Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7498373Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7498753Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152.
2025-12-04T12:05:39.7498780Z
2025-12-04T12:05:39.7498854Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7499122Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7499124Z
2025-12-04T12:05:39.7499210Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7499277Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
2025-12-04T12:05:39.7499339Z ======================= 1 failed, 9 deselected in 12.44s =======================
2025-12-04T12:05:39.7499379Z Got exit code 1
2025-12-04T12:05:39.7499419Z Retrying single test...
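Before this retry, the run also logged a ProcessGroupNCCL warning that destroy_process_group() was never called before program exit. A minimal init/teardown sketch that avoids that warning, assuming a torchrun-style env:// rendezvous ("nccl" here also covers ROCm builds, where it is backed by RCCL):

    import torch.distributed as dist

    def main() -> None:
        # Rendezvous settings (MASTER_ADDR, RANK, etc.) are assumed to come
        # from the launcher environment, as with torchrun.
        dist.init_process_group(backend="nccl")
        try:
            pass  # training / test body goes here
        finally:
            # Explicit teardown avoids the resource-leak warning at exit.
            dist.destroy_process_group()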
2025-12-04T12:05:39.7499606Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-55b6e7f42941fea3.xml
2025-12-04T12:05:39.7499664Z ============================= test session starts ==============================
2025-12-04T12:05:39.7499777Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.7499820Z cachedir: .pytest_cache
2025-12-04T12:05:39.7499977Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.7500024Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.7500066Z configfile: pytest.ini
2025-12-04T12:05:39.7500225Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.7500299Z collecting ... collected 10 items / 9 deselected / 1 selected
2025-12-04T12:05:39.7500558Z stepcurrent: skipping 3 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7500632Z Running 1 items in this shard
2025-12-04T12:05:39.7500634Z
2025-12-04T12:05:39.7501000Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda I1204 12:01:01.515000 420105 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 420174
2025-12-04T12:05:39.7501154Z I1204 12:01:01.516000 420105 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 420175
2025-12-04T12:05:39.7501306Z I1204 12:01:01.517000 420105 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 420176
2025-12-04T12:05:39.7501455Z I1204 12:01:01.517000 420105 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 420177
2025-12-04T12:05:39.7501816Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7501864Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7502352Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7502416Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7502768Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7502844Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7503192Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7503239Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7503717Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7503782Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7504259Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7504317Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7504666Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:05:39.7504714Z self.encoder = TransformerEncoder(
2025-12-04T12:05:39.7505197Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7505276Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7508653Z [rank3]:E1204 12:01:11.604000 420177 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 3. CUDA driver allocated memory was 2250244096 and is now 3097493504.
2025-12-04T12:05:39.7509857Z [rank3]:E1204 12:01:11.604000 420177 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7509898Z dist init r=3, world=4
2025-12-04T12:05:39.7513279Z [rank2]:E1204 12:01:11.692000 420176 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 2. CUDA driver allocated memory was 2300575744 and is now 3147825152.
2025-12-04T12:05:39.7514495Z [rank2]:E1204 12:01:11.692000 420176 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.7514533Z dist init r=2, world=4
2025-12-04T12:05:39.7517872Z [rank1]:E1204 12:01:11.703000 420175 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 1. CUDA driver allocated memory was 2317352960 and is now 3164602368.
2025-12-04T12:05:39.7519073Z [rank1]:E1204 12:01:11.703000 420175 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7519112Z dist init r=1, world=4
2025-12-04T12:05:39.7522511Z [rank0]:E1204 12:01:11.770000 420174 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
2025-12-04T12:05:39.7523700Z [rank0]:E1204 12:01:11.770000 420174 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7523738Z dist init r=0, world=4
2025-12-04T12:05:39.7524096Z [rank0]:[W1204 12:01:11.808287966 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:05:39.7524138Z FAILED [12.2250s] [100%]
2025-12-04T12:05:39.7524140Z
2025-12-04T12:05:39.7524195Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7524330Z _ TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda _
2025-12-04T12:05:39.7525108Z RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7526719Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
2025-12-04T12:05:39.7527237Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7527324Z Process 0 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7527554Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-55b6e7f42941fea3.xml -
2025-12-04T12:05:39.7527640Z =========================== short test summary info ============================
2025-12-04T12:05:39.7527922Z FAILED [12.2250s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda - RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7529504Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704.
Caching allocator allocated memory was 512 and is now reported as 19456 on device 0. CUDA driver allocated memory was 2459959296 and is now 3307208704. 2025-12-04T12:05:39.7529508Z 2025-12-04T12:05:39.7529582Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7529847Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda 2025-12-04T12:05:39.7529849Z 2025-12-04T12:05:39.7529935Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7530018Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 2025-12-04T12:05:39.7530080Z ======================= 1 failed, 9 deselected in 12.24s ======================= 2025-12-04T12:05:39.7530117Z Got exit code 1 2025-12-04T12:05:39.7530333Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda 2025-12-04T12:05:39.7530458Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set 2025-12-04T12:05:39.7530731Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-ca996c3e5b967c8d.xml 2025-12-04T12:05:39.7530787Z ============================= test session starts ============================== 2025-12-04T12:05:39.7530901Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7530943Z cachedir: .pytest_cache 2025-12-04T12:05:39.7531099Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7531144Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7531186Z configfile: pytest.ini 2025-12-04T12:05:39.7531348Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7531420Z collecting ... collected 10 items / 4 deselected / 6 selected 2025-12-04T12:05:39.7531507Z stepcurrent: skipping 4 already run items. 2025-12-04T12:05:39.7531549Z Running 6 items in this shard 2025-12-04T12:05:39.7531551Z 2025-12-04T12:05:39.7531891Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda I1204 12:01:16.448000 420507 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 420576 2025-12-04T12:05:39.7532044Z I1204 12:01:16.449000 420507 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 420577 2025-12-04T12:05:39.7532194Z I1204 12:01:16.450000 420507 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 420578 2025-12-04T12:05:39.7532340Z I1204 12:01:16.450000 420507 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 420579 2025-12-04T12:05:39.7532827Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. 
If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7532891Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7533370Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7533430Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7533907Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7533966Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7534463Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7534521Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7534662Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7534822Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7535107Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7535260Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7535541Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7535664Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7535964Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7536112Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7536385Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in 
wrapper 2025-12-04T12:05:39.7536531Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7536800Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7536937Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7537209Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7537357Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7537861Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096. 2025-12-04T12:05:39.7537981Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7538198Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7538614Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.7538728Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7538936Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7543174Z [rank2]:E1204 12:01:26.340000 420578 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:05:39.7543252Z dist init r=2, world=4 2025-12-04T12:05:39.7543394Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7543557Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7543842Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7543995Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7544321Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7544447Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7544724Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7544876Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7545152Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7545300Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7545571Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7545708Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7546015Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7546161Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7546669Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648. 
2025-12-04T12:05:39.7546809Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7547002Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7547397Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7547511Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7547720Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7547884Z [rank0]:E1204 12:01:26.382000 420576 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7547924Z dist init r=0, world=4
2025-12-04T12:05:39.7548060Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7548216Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7548499Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7548677Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7548963Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7549084Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7549359Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7549507Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7549780Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7549927Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7550197Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7550335Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7550642Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7550792Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7551323Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2250244096 and is now 2986344448.
2025-12-04T12:05:39.7551436Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7551628Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7552019Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7552131Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7552338Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7552500Z [rank3]:E1204 12:01:26.395000 420579 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7552537Z dist init r=3, world=4
2025-12-04T12:05:39.7552698Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7552858Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7553147Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7553299Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7553578Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7553703Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7553975Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7554121Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7554393Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7554536Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7554808Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7554944Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7555241Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7555386Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7555890Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7556005Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7556197Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7556587Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7556697Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7556905Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7557087Z [rank1]:E1204 12:01:26.405000 420577 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7557125Z dist init r=1, world=4
2025-12-04T12:05:39.7557460Z [rank0]:[W1204 12:01:26.420906593 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:05:39.7557500Z FAILED [11.9259s] [ 16%]
2025-12-04T12:05:39.7557503Z 
2025-12-04T12:05:39.7557561Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7557694Z _ TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda _
2025-12-04T12:05:39.7557745Z Traceback (most recent call last):
2025-12-04T12:05:39.7557906Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7557951Z     self._join_processes(fn)
2025-12-04T12:05:39.7558121Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7558175Z     self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7558384Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7558428Z     raise RuntimeError(error)
2025-12-04T12:05:39.7558507Z RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7558552Z Traceback (most recent call last):
2025-12-04T12:05:39.7558711Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7558758Z     getattr(self, test_name)()
2025-12-04T12:05:39.7558913Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7558949Z     fn()
2025-12-04T12:05:39.7559097Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7559139Z     method(*args, **kwargs)
2025-12-04T12:05:39.7559307Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7559349Z     method(*args, **kwargs)
2025-12-04T12:05:39.7559497Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7559534Z     with policy():
2025-12-04T12:05:39.7559684Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7559726Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7560108Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7560110Z 
2025-12-04T12:05:39.7560188Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7560457Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7560459Z 
2025-12-04T12:05:39.7560547Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7560549Z 
2025-12-04T12:05:39.7560551Z 
2025-12-04T12:05:39.7560798Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7560886Z Process 0 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7561118Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-ca996c3e5b967c8d.xml -
2025-12-04T12:05:39.7561178Z =========================== short test summary info ============================
2025-12-04T12:05:39.7561460Z FAILED [11.9259s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda - RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7561506Z Traceback (most recent call last):
2025-12-04T12:05:39.7561667Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7561709Z     getattr(self, test_name)()
2025-12-04T12:05:39.7561868Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7561904Z     fn()
2025-12-04T12:05:39.7562052Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7562093Z     method(*args, **kwargs)
2025-12-04T12:05:39.7562245Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7562287Z     method(*args, **kwargs)
2025-12-04T12:05:39.7562434Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7562474Z     with policy():
2025-12-04T12:05:39.7562623Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7562664Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7563046Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7563050Z 
2025-12-04T12:05:39.7563124Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7563428Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7563430Z 
2025-12-04T12:05:39.7563517Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7563580Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
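[Note on the ProcessGroupNCCL warnings above: each worker initializes a process group but exits (here with code 10, the leak-check failure) without tearing it down, which is what triggers the "destroy_process_group() was not called before program exit" message. A sketch of the shutdown pattern the warning asks for; the function name run is illustrative, and an env:// rendezvous (MASTER_ADDR/MASTER_PORT already exported, as the test harness does) is assumed.]

import torch
import torch.distributed as dist

def run(rank: int, world_size: int) -> None:
    # "nccl" resolves to RCCL on ROCm builds such as this gfx942 runner.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    try:
        pass  # collectives / FSDP work goes here
    finally:
        # Explicit teardown releases communicator resources and avoids
        # the resource-leak warning seen in the log.
        dist.destroy_process_group()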
2025-12-04T12:05:39.7563642Z ======================= 1 failed, 4 deselected in 11.94s =======================
2025-12-04T12:05:39.7563682Z Got exit code 1
2025-12-04T12:05:39.7563722Z Retrying single test...
2025-12-04T12:05:39.7563909Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-48d27f43ba40651d.xml
2025-12-04T12:05:39.7563966Z ============================= test session starts ==============================
2025-12-04T12:05:39.7564082Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.7564124Z cachedir: .pytest_cache
2025-12-04T12:05:39.7564280Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.7564326Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.7564366Z configfile: pytest.ini
2025-12-04T12:05:39.7564528Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.7564631Z collecting ... collected 10 items / 9 deselected / 1 selected
2025-12-04T12:05:39.7564890Z stepcurrent: skipping 4 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7564932Z Running 1 items in this shard
2025-12-04T12:05:39.7564934Z 
2025-12-04T12:05:39.7565275Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda I1204 12:01:31.116000 420909 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 420978
2025-12-04T12:05:39.7565428Z I1204 12:01:31.117000 420909 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 420979
2025-12-04T12:05:39.7565581Z I1204 12:01:31.118000 420909 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 420980
2025-12-04T12:05:39.7565729Z I1204 12:01:31.119000 420909 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 420981
2025-12-04T12:05:39.7566220Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7566282Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7566761Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7566824Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7567322Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7567380Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7567854Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7567912Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7568054Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7568213Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7568502Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7568653Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7568936Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7569090Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7569368Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7569515Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7569787Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7569934Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7570207Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7570342Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7570648Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7570795Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7571301Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7571415Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7571641Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7572035Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7572149Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7572357Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7572520Z [rank1]:E1204 12:01:41.057000 420979 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7572561Z dist init r=1, world=4
2025-12-04T12:05:39.7572696Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7572853Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7573135Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7573313Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7573594Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7573718Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7573992Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7574140Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7574415Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7574559Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7574832Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7574965Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7575239Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7575386Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7575908Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2250244096 and is now 2986344448.
2025-12-04T12:05:39.7576023Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7576217Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7576612Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7576724Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7576937Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7577099Z [rank3]:E1204 12:01:41.065000 420981 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7577136Z dist init r=3, world=4
2025-12-04T12:05:39.7577272Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7577448Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7577731Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7577884Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7578167Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7578289Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7578567Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7578714Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7578986Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7579133Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7579403Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7579540Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7579813Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7579987Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7580490Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096.
2025-12-04T12:05:39.7580636Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7580831Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7581223Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7581335Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7581542Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7581734Z [rank2]:E1204 12:01:41.078000 420980 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.7581774Z dist init r=2, world=4
2025-12-04T12:05:39.7581908Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7582066Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7582348Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7582501Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7582783Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7582907Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7583182Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7583328Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7583601Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7583747Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7584019Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7584176Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7584450Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7584595Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7585098Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7585212Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7585403Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7585795Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7585928Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7586135Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7586297Z [rank0]:E1204 12:01:41.116000 420978 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7586336Z dist init r=0, world=4
2025-12-04T12:05:39.7586668Z [rank0]:[W1204 12:01:41.175566459 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:05:39.7586707Z FAILED [12.0257s] [100%]
2025-12-04T12:05:39.7586711Z 
2025-12-04T12:05:39.7586767Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7586897Z _ TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda _
2025-12-04T12:05:39.7586944Z Traceback (most recent call last):
2025-12-04T12:05:39.7587105Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7587151Z     self._join_processes(fn)
2025-12-04T12:05:39.7587325Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7587381Z     self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7587556Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7587600Z     raise RuntimeError(error)
2025-12-04T12:05:39.7587680Z RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7587727Z Traceback (most recent call last):
2025-12-04T12:05:39.7587886Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7587929Z     getattr(self, test_name)()
2025-12-04T12:05:39.7588113Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7588151Z     fn()
2025-12-04T12:05:39.7588299Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7588341Z     method(*args, **kwargs)
2025-12-04T12:05:39.7588488Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7588529Z     method(*args, **kwargs)
2025-12-04T12:05:39.7588677Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7588716Z     with policy():
2025-12-04T12:05:39.7588867Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7588906Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7589289Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7589292Z 
2025-12-04T12:05:39.7589367Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7589639Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7589662Z 
2025-12-04T12:05:39.7589749Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7589751Z 
2025-12-04T12:05:39.7589754Z 
2025-12-04T12:05:39.7589831Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7589918Z Process 0 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7590148Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-48d27f43ba40651d.xml -
2025-12-04T12:05:39.7590208Z =========================== short test summary info ============================
2025-12-04T12:05:39.7590485Z FAILED [12.0257s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda - RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7590534Z Traceback (most recent call last):
2025-12-04T12:05:39.7590727Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7590771Z     getattr(self, test_name)()
2025-12-04T12:05:39.7590930Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7590964Z     fn()
2025-12-04T12:05:39.7591115Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7591155Z     method(*args, **kwargs)
2025-12-04T12:05:39.7591304Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7591343Z     method(*args, **kwargs)
2025-12-04T12:05:39.7591490Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7591529Z     with policy():
2025-12-04T12:05:39.7591679Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7591719Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7592129Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7592132Z 
2025-12-04T12:05:39.7592206Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7592476Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7592478Z 
2025-12-04T12:05:39.7592567Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7592629Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
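[Note on the repeated _init_utils.py UserWarning above: the message itself spells out the fix, which is to pin each rank to a device before constructing FSDP, or to hand FSDP an indexed device rather than the bare "cuda" the test passed. A sketch of both options; the function name wrap_model is illustrative, the import path is the one used by this test, and an already-initialized default process group is assumed.]

import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def wrap_model(model: torch.nn.Module, rank: int) -> FSDP:
    # Option 1: set the current device first, so a bare device string
    # resolves to the right index without the UserWarning.
    torch.cuda.set_device(rank)
    # Option 2: pass an explicitly indexed device as `device_id`.
    return FSDP(model, device_id=torch.device("cuda", rank))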
2025-12-04T12:05:39.7592691Z ======================= 1 failed, 9 deselected in 12.04s ======================= 2025-12-04T12:05:39.7592727Z Got exit code 1 2025-12-04T12:05:39.7592768Z Retrying single test... 2025-12-04T12:05:39.7592956Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-fbc5bcdc09174435.xml 2025-12-04T12:05:39.7593015Z ============================= test session starts ============================== 2025-12-04T12:05:39.7593127Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7593169Z cachedir: .pytest_cache 2025-12-04T12:05:39.7593323Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7593407Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7593446Z configfile: pytest.ini 2025-12-04T12:05:39.7593606Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7593678Z collecting ... collected 10 items / 9 deselected / 1 selected 2025-12-04T12:05:39.7593940Z stepcurrent: skipping 4 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda 2025-12-04T12:05:39.7593982Z Running 1 items in this shard 2025-12-04T12:05:39.7593984Z 2025-12-04T12:05:39.7594323Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda I1204 12:01:45.605000 421311 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 421380 2025-12-04T12:05:39.7594476Z I1204 12:01:45.606000 421311 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 421381 2025-12-04T12:05:39.7594624Z I1204 12:01:45.607000 421311 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 421382 2025-12-04T12:05:39.7594772Z I1204 12:01:45.608000 421311 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 421383 2025-12-04T12:05:39.7595258Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7595321Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7595796Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7595859Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7596358Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. 
FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7596415Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7596891Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7596949Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7597092Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7597251Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7597536Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7597715Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7597995Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7598120Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7598425Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7598573Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7598847Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7598996Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7599270Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7599404Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7599677Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7599824Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 
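The UserWarning above suggests its own fix: give each rank an explicit device before wrapping. A minimal sketch of that pattern (hypothetical `setup_fsdp` helper, not code from test_fsdp_comm.py; assumes one GPU per rank):

    # Hypothetical helper; illustrates the fix the warning asks for.
    import torch
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    def setup_fsdp(model, rank):
        torch.cuda.set_device(rank)  # pin the current device before FSDP init
        # An indexed device avoids the "does not have an explicit index" warning
        # that bare `device_id="cuda"` triggers.
        return FSDP(model, device_id=torch.device("cuda", rank))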
2025-12-04T12:05:39.7597092Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7597251Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7597536Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7597715Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7597995Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7598120Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7598425Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7598573Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7598847Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7598996Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7599270Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7599404Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7599677Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7599824Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7600347Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7600462Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7600692Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7601086Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7601202Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7601412Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7601572Z [rank1]:E1204 12:01:55.528000 421381 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7601613Z dist init r=1, world=4
2025-12-04T12:05:39.7601748Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7601906Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7602216Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7602369Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7602649Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7602773Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7603048Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7603194Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7603469Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7603613Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7603886Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7604021Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7604295Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7604467Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7604968Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2250244096 and is now 2986344448.
2025-12-04T12:05:39.7605081Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7605275Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7605669Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7605782Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7605992Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7606154Z [rank3]:E1204 12:01:55.587000 421383 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7606212Z dist init r=3, world=4
2025-12-04T12:05:39.7606347Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7606503Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7606787Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7606937Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7607217Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7607340Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7607614Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7607759Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7608034Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7608178Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7608450Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7608584Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7608875Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7609023Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7609524Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7609638Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7609833Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7610223Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7610362Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7610569Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7610768Z [rank0]:E1204 12:01:55.607000 421380 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7610808Z dist init r=0, world=4
2025-12-04T12:05:39.7610943Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7611101Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7611382Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7611535Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7611816Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7611938Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7612209Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7612354Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7612630Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7612774Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7613074Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7613208Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7613482Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7613629Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7614133Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096.
2025-12-04T12:05:39.7614245Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7614436Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7614851Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7614962Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7615172Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7615334Z [rank2]:E1204 12:01:55.650000 421382 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.7615371Z dist init r=2, world=4
2025-12-04T12:05:39.7615702Z [rank0]:[W1204 12:01:55.729829908 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources.
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
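The ProcessGroupNCCL warning above asks for an explicit shutdown before the worker exits. A minimal sketch of that pattern (assumes a standard one-process-per-GPU worker with MASTER_ADDR/MASTER_PORT set; `run_test_body` is a placeholder, not this harness's code):

    import torch.distributed as dist

    def worker(rank, world_size):
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        try:
            run_test_body()  # placeholder for the actual per-rank work
        finally:
            # Releases NCCL resources and avoids the
            # "destroy_process_group() was not called before program exit" warning.
            dist.destroy_process_group()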
2025-12-04T12:05:39.7615743Z FAILED [12.0263s] [100%]
2025-12-04T12:05:39.7615745Z 
2025-12-04T12:05:39.7615800Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7615932Z _ TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda _
2025-12-04T12:05:39.7615978Z Traceback (most recent call last):
2025-12-04T12:05:39.7616138Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7616183Z     self._join_processes(fn)
2025-12-04T12:05:39.7616353Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7616407Z     self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7616585Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7616628Z     raise RuntimeError(error)
2025-12-04T12:05:39.7616707Z RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7616752Z Traceback (most recent call last):
2025-12-04T12:05:39.7616930Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7616973Z     getattr(self, test_name)()
2025-12-04T12:05:39.7617128Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7617162Z     fn()
2025-12-04T12:05:39.7617311Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7617351Z     method(*args, **kwargs)
2025-12-04T12:05:39.7617502Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7617541Z     method(*args, **kwargs)
2025-12-04T12:05:39.7617688Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7617725Z     with policy():
2025-12-04T12:05:39.7617875Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7617915Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7618294Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7618297Z 
2025-12-04T12:05:39.7618395Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7618664Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7618667Z 
2025-12-04T12:05:39.7618753Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7618757Z 
2025-12-04T12:05:39.7618816Z Process 1 exited with error code 10 and exception:
2025-12-04T12:05:39.7618861Z Traceback (most recent call last):
2025-12-04T12:05:39.7619024Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7619065Z     getattr(self, test_name)()
2025-12-04T12:05:39.7619222Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7619258Z     fn()
2025-12-04T12:05:39.7619406Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7619446Z     method(*args, **kwargs)
2025-12-04T12:05:39.7619593Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7619633Z     method(*args, **kwargs)
2025-12-04T12:05:39.7619780Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7619817Z     with policy():
2025-12-04T12:05:39.7619965Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7620008Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7620384Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7620388Z 
2025-12-04T12:05:39.7620465Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7620794Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7620798Z 
2025-12-04T12:05:39.7620883Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7620885Z 
2025-12-04T12:05:39.7620887Z 
2025-12-04T12:05:39.7620963Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7621048Z Process 0 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7621279Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-fbc5bcdc09174435.xml -
2025-12-04T12:05:39.7621340Z =========================== short test summary info ============================
2025-12-04T12:05:39.7621620Z FAILED [12.0263s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda - RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7621666Z Traceback (most recent call last):
2025-12-04T12:05:39.7621830Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7621871Z     getattr(self, test_name)()
2025-12-04T12:05:39.7622028Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7622061Z     fn()
2025-12-04T12:05:39.7622211Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7622278Z     method(*args, **kwargs)
2025-12-04T12:05:39.7622428Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7622468Z     method(*args, **kwargs)
2025-12-04T12:05:39.7622614Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7622653Z     with policy():
2025-12-04T12:05:39.7622801Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7622842Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7623218Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7623222Z 
2025-12-04T12:05:39.7623297Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7623561Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7623563Z 
2025-12-04T12:05:39.7623650Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7623652Z 
2025-12-04T12:05:39.7623710Z Process 1 exited with error code 10 and exception:
2025-12-04T12:05:39.7623755Z Traceback (most recent call last):
2025-12-04T12:05:39.7623917Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7623958Z     getattr(self, test_name)()
2025-12-04T12:05:39.7624118Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7624153Z     fn()
2025-12-04T12:05:39.7624302Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7624341Z     method(*args, **kwargs)
2025-12-04T12:05:39.7624515Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7624554Z     method(*args, **kwargs)
2025-12-04T12:05:39.7624701Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7624737Z     with policy():
2025-12-04T12:05:39.7624886Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7624925Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7625301Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7625305Z 
2025-12-04T12:05:39.7625379Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7625644Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7625646Z 
2025-12-04T12:05:39.7625731Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7625829Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
2025-12-04T12:05:39.7625891Z ======================= 1 failed, 9 deselected in 12.04s =======================
2025-12-04T12:05:39.7625950Z Got exit code 1
2025-12-04T12:05:39.7626166Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda
2025-12-04T12:05:39.7626292Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set
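For readers unfamiliar with the mem_leak_check mode driving these failures: the policy context manager in the tracebacks compares device memory before and after the test body. A rough, simplified illustration of that comparison (this is not the actual implementation in torch/testing/_internal/common_utils.py, and the numbers in the failures above are bytes):

    import torch

    class LeakCheck:
        """Simplified stand-in for the CUDA mem-leak-check policy."""
        def __init__(self, device=0):
            self.device = device

        def __enter__(self):
            torch.cuda.synchronize(self.device)
            self.alloc_before = torch.cuda.memory_allocated(self.device)   # caching allocator
            self.free_before = torch.cuda.mem_get_info(self.device)[0]     # driver level (free bytes)
            return self

        def __exit__(self, *exc):
            torch.cuda.synchronize(self.device)
            alloc_after = torch.cuda.memory_allocated(self.device)
            free_after = torch.cuda.mem_get_info(self.device)[0]
            # Flag a leak only when both the allocator and the driver agree
            # that usage grew, mirroring the "driver API confirmed" wording.
            if alloc_after > self.alloc_before and free_after < self.free_before:
                raise RuntimeError(
                    f"possible leak: allocator {self.alloc_before} -> {alloc_after} "
                    f"bytes on device {self.device}"
                )
            return False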
2025-12-04T12:05:39.7626480Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-e1c5a83b2f435bf3.xml
2025-12-04T12:05:39.7626537Z ============================= test session starts ==============================
2025-12-04T12:05:39.7626650Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.7626690Z cachedir: .pytest_cache
2025-12-04T12:05:39.7626845Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.7626892Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.7626934Z configfile: pytest.ini
2025-12-04T12:05:39.7627092Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.7627164Z collecting ... collected 10 items / 5 deselected / 5 selected
2025-12-04T12:05:39.7627217Z stepcurrent: skipping 5 already run items.
2025-12-04T12:05:39.7627261Z Running 5 items in this shard
2025-12-04T12:05:39.7627264Z 
2025-12-04T12:05:39.7627606Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda I1204 12:02:00.470000 421713 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 421782
2025-12-04T12:05:39.7627759Z I1204 12:02:00.470000 421713 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 421783
2025-12-04T12:05:39.7627910Z I1204 12:02:00.471000 421713 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 421784
2025-12-04T12:05:39.7628059Z I1204 12:02:00.472000 421713 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 421785
2025-12-04T12:05:39.7628568Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7628629Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7629107Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7629168Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7629644Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7629703Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7630176Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7630253Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7630393Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7630555Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7630882Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7631038Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7631323Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7631448Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7631725Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7631872Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7632146Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7632293Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7632565Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7632735Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7633008Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7633156Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7633661Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2250244096 and is now 2986344448.
2025-12-04T12:05:39.7633779Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7633972Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7634360Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda
2025-12-04T12:05:39.7634503Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7634710Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7634874Z [rank3]:E1204 12:02:10.392000 421785 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7634912Z dist init r=3, world=4
2025-12-04T12:05:39.7635048Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7635204Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7635490Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7635645Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7635926Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7636049Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7636321Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7636467Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7636740Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7636907Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7637178Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7637312Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7637586Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7637733Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7638242Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7638355Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7638546Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7638955Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda
2025-12-04T12:05:39.7639067Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7639276Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7639436Z [rank1]:E1204 12:02:10.397000 421783 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7639474Z dist init r=1, world=4
2025-12-04T12:05:39.7639609Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7639768Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7640053Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7640204Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7640485Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7640639Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7640913Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7641058Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7641371Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7641515Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7641787Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7641923Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7642197Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7642344Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7642847Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096.
2025-12-04T12:05:39.7642988Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7643180Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7643569Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda
2025-12-04T12:05:39.7643682Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7643888Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7644051Z [rank2]:E1204 12:02:10.488000 421784 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.7644089Z dist init r=2, world=4
2025-12-04T12:05:39.7644225Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7644383Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7644669Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7644822Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7645103Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7645226Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7645518Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7645664Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7645934Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7646081Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7646353Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7646486Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7646760Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7646904Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7647428Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7647543Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7647734Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7648123Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda
2025-12-04T12:05:39.7648235Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:05:39.7648442Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7648605Z [rank0]:E1204 12:02:10.498000 421782 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7648643Z dist init r=0, world=4
2025-12-04T12:05:39.7648973Z [rank0]:[W1204 12:02:10.755613039 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources.
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:05:39.7649016Z FAILED [11.9247s] [ 20%]
2025-12-04T12:05:39.7649018Z 
2025-12-04T12:05:39.7649073Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7649207Z _ TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda _
2025-12-04T12:05:39.7649252Z Traceback (most recent call last):
2025-12-04T12:05:39.7649434Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7649478Z     self._join_processes(fn)
2025-12-04T12:05:39.7649648Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7649701Z     self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7649875Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7649921Z     raise RuntimeError(error)
2025-12-04T12:05:39.7649999Z RuntimeError: Process 1 exited with error code 10 and exception:
2025-12-04T12:05:39.7650044Z Traceback (most recent call last):
2025-12-04T12:05:39.7650201Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7650243Z     getattr(self, test_name)()
2025-12-04T12:05:39.7650399Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7650434Z     fn()
2025-12-04T12:05:39.7650580Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7650664Z     method(*args, **kwargs)
2025-12-04T12:05:39.7650811Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7650852Z     method(*args, **kwargs)
2025-12-04T12:05:39.7651031Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7651069Z     with policy():
2025-12-04T12:05:39.7651218Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7651259Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7651643Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7651647Z 
2025-12-04T12:05:39.7651721Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7651988Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda
2025-12-04T12:05:39.7651992Z 
2025-12-04T12:05:39.7652078Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7652080Z 
2025-12-04T12:05:39.7652139Z Process 3 exited with error code 10 and exception:
2025-12-04T12:05:39.7652182Z Traceback (most recent call last):
2025-12-04T12:05:39.7652345Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7652386Z     getattr(self, test_name)()
2025-12-04T12:05:39.7652542Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7652575Z     fn()
2025-12-04T12:05:39.7652723Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7652762Z     method(*args, **kwargs)
2025-12-04T12:05:39.7652913Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7652952Z     method(*args, **kwargs)
2025-12-04T12:05:39.7653099Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7653135Z     with policy():
2025-12-04T12:05:39.7653313Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7653354Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7653729Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2250244096 and is now 2986344448.
2025-12-04T12:05:39.7653731Z 
2025-12-04T12:05:39.7653806Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7654074Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda
2025-12-04T12:05:39.7654076Z 
2025-12-04T12:05:39.7654162Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7654164Z 
2025-12-04T12:05:39.7654167Z 
2025-12-04T12:05:39.7654242Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7654328Z Process 1 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7654558Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-e1c5a83b2f435bf3.xml -
2025-12-04T12:05:39.7654617Z =========================== short test summary info ============================
2025-12-04T12:05:39.7654924Z FAILED [11.9247s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda - RuntimeError: Process 1 exited with error code 10 and exception:
2025-12-04T12:05:39.7654969Z Traceback (most recent call last):
2025-12-04T12:05:39.7655131Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7655174Z     getattr(self, test_name)()
2025-12-04T12:05:39.7655331Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7655364Z     fn()
2025-12-04T12:05:39.7655514Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7655554Z     method(*args, **kwargs)
2025-12-04T12:05:39.7655703Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7655743Z     method(*args, **kwargs)
2025-12-04T12:05:39.7655890Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7655926Z     with policy():
2025-12-04T12:05:39.7656075Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7656116Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7656497Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7656499Z 2025-12-04T12:05:39.7656573Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7656837Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7656839Z 2025-12-04T12:05:39.7656925Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7656927Z 2025-12-04T12:05:39.7656984Z Process 3 exited with error code 10 and exception: 2025-12-04T12:05:39.7657047Z Traceback (most recent call last): 2025-12-04T12:05:39.7657207Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7657248Z getattr(self, test_name)() 2025-12-04T12:05:39.7657404Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7657438Z fn() 2025-12-04T12:05:39.7657586Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7657628Z method(*args, **kwargs) 2025-12-04T12:05:39.7657775Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7657816Z method(*args, **kwargs) 2025-12-04T12:05:39.7657964Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7658002Z with policy(): 2025-12-04T12:05:39.7658149Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7658190Z raise RuntimeError(msg) 2025-12-04T12:05:39.7658569Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2250244096 and is now 2986344448. 2025-12-04T12:05:39.7658594Z 2025-12-04T12:05:39.7658666Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7658931Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7658933Z 2025-12-04T12:05:39.7659018Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7659082Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 2025-12-04T12:05:39.7659143Z ======================= 1 failed, 5 deselected in 11.94s ======================= 2025-12-04T12:05:39.7659180Z Got exit code 1 2025-12-04T12:05:39.7659219Z Retrying single test... 
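What this failure means: the job runs with PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1, so the harness snapshots GPU memory before each test and compares it afterwards, failing the test when both the caching allocator and the driver report growth. The sketch below illustrates that before/after accounting; it is a minimal approximation assuming a single device and zero tolerated slack, not the actual check in torch/testing/_internal/common_utils.py (which tracks every device and applies retries and thresholds).

```python
# Illustrative sketch only -- not PyTorch's CudaMemoryLeakCheck.
# Assumes one visible device and no allowed slack.
import torch

def check_for_cuda_leak(test_fn, device: int = 0) -> None:
    torch.cuda.synchronize(device)
    alloc_before = torch.cuda.memory_allocated(device)   # caching-allocator bytes
    free, total = torch.cuda.mem_get_info(device)        # driver-level view
    driver_before = total - free

    test_fn()                                            # run the test body

    torch.cuda.synchronize(device)
    alloc_after = torch.cuda.memory_allocated(device)
    free, total = torch.cuda.mem_get_info(device)
    driver_after = total - free

    # Only flag a leak when the driver confirms what the allocator reports,
    # matching the wording of the RuntimeError in the log above.
    if alloc_after > alloc_before and driver_after > driver_before:
        raise RuntimeError(
            f"CUDA driver API confirmed a leak! Caching allocator allocated "
            f"memory was {alloc_before} and is now {alloc_after} on device "
            f"{device}. CUDA driver allocated memory was {driver_before} "
            f"and is now {driver_after}."
        )
```

On ROCm runners such as this one, the torch.cuda.* calls are backed by HIP, which is why an AMD gfx942 job still reports "CUDA driver allocated memory".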
Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-6293f9b5a57c037f.xml
============================= test session starts ==============================
platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
cachedir: .pytest_cache
hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
rootdir: /var/lib/jenkins/pytorch
configfile: pytest.ini
plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
collecting ... collected 10 items / 9 deselected / 1 selected
stepcurrent: skipping 5 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda
Running 1 items in this shard

distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda
I1204 12:02:15.231000 422115 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 422184
I1204 12:02:15.232000 422115 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 422185
I1204 12:02:15.233000 422115 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 422186
I1204 12:02:15.233000 422115 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 422187
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
  device_from_device_id = _get_device_from_device_id(
  (the identical UserWarning is also emitted on ranks 1, 3, and 0)
[rank2]:E1204 12:02:25.242000 422186 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
[rank2]:E1204 12:02:25.242000 422186 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
[rank2]:E1204 12:02:25.242000 422186 site-packages/torch/testing/_internal/common_distributed.py:935]   ... (same run_test/wrapper/policy frames as in the FAILURES traceback above) ...
[rank2]:E1204 12:02:25.242000 422186 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096.
[rank2]:E1204 12:02:25.242000 422186 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
dist init r=2, world=4
[rank0]:E1204 12:02:25.300000 422184 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
[rank0]:E1204 12:02:25.300000 422184 site-packages/torch/testing/_internal/common_distributed.py:935]   ... (same traceback frames as above) ...
[rank0]:E1204 12:02:25.300000 422184 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
[rank0]:E1204 12:02:25.300000 422184 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
dist init r=0, world=4
[rank1]:E1204 12:02:25.356000 422185 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
[rank1]:E1204 12:02:25.356000 422185 site-packages/torch/testing/_internal/common_distributed.py:935]   ... (same traceback frames as above) ...
[rank1]:E1204 12:02:25.356000 422185 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
[rank1]:E1204 12:02:25.356000 422185 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
dist init r=1, world=4
[rank3]:E1204 12:02:25.363000 422187 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
[rank3]:E1204 12:02:25.363000 422187 site-packages/torch/testing/_internal/common_distributed.py:935]   ... (same traceback frames as above) ...
[rank3]:E1204 12:02:25.363000 422187 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2250244096 and is now 2986344448.
[rank3]:E1204 12:02:25.363000 422187 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
dist init r=3, world=4
[rank0]:[W1204 12:02:25.508556349 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
FAILED [12.1259s] [100%]

=================================== FAILURES ===================================
_ TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda _
Traceback (most recent call last):
  ... (same _join_processes/_check_return_codes frames as above) ...
RuntimeError: Process 2 exited with error code 10 and exception:
Traceback (most recent call last):
  ... (same run_test/wrapper/policy frames as above) ...
RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096.

----------------------------- Captured stdout call -----------------------------
Process 2 terminated with exit code 10, terminating remaining processes.
- generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-6293f9b5a57c037f.xml -
=========================== short test summary info ============================
FAILED [12.1259s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda - RuntimeError: Process 2 exited with error code 10 (traceback repeated above)
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
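Both retries also reproduce the FSDP `device_id` UserWarning shown above. The warning itself names the two fixes; below is a minimal sketch of each, assuming the usual one-process-per-GPU layout and an already-initialized default process group (the function name, model, and rank arguments are placeholders, not code from this test):

```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def wrap_with_fsdp(model: torch.nn.Module, rank: int) -> FSDP:
    # Fix 1: bind this process to its GPU before FSDP initialization,
    # so an index-less device_id="cuda" resolves to the intended device.
    torch.cuda.set_device(rank)
    # Fix 2 (alternative): pass a device with an explicit index instead
    # of the bare "cuda" string that triggered the warning.
    return FSDP(model, device_id=torch.device("cuda", rank))
```

Either change silences the warning; in this log it is incidental noise, and the actual failure is the memory-leak check.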
2025-12-04T12:05:39.7688169Z ======================= 1 failed, 9 deselected in 12.14s ======================= 2025-12-04T12:05:39.7688208Z Got exit code 1 2025-12-04T12:05:39.7688248Z Retrying single test... 2025-12-04T12:05:39.7688436Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-c6267b71acc40011.xml 2025-12-04T12:05:39.7688493Z ============================= test session starts ============================== 2025-12-04T12:05:39.7688608Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7688648Z cachedir: .pytest_cache 2025-12-04T12:05:39.7688806Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7688853Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7688893Z configfile: pytest.ini 2025-12-04T12:05:39.7689054Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7689128Z collecting ... collected 10 items / 9 deselected / 1 selected 2025-12-04T12:05:39.7689389Z stepcurrent: skipping 5 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7689431Z Running 1 items in this shard 2025-12-04T12:05:39.7689433Z 2025-12-04T12:05:39.7689795Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda I1204 12:02:30.078000 422517 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 422586 2025-12-04T12:05:39.7689948Z I1204 12:02:30.078000 422517 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 422587 2025-12-04T12:05:39.7690099Z I1204 12:02:30.079000 422517 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 422588 2025-12-04T12:05:39.7690251Z I1204 12:02:30.079000 422517 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 422589 2025-12-04T12:05:39.7690785Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7690848Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7691330Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7691421Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7691899Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. 
FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7691958Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7692433Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7692491Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7692632Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7692791Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7693077Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7693228Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7693514Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7693638Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7693936Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7694082Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7694352Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7694497Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7694770Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7694906Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7695179Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7695323Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 
2025-12-04T12:05:39.7695828Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312. 2025-12-04T12:05:39.7695965Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7696160Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7696548Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7696661Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7696869Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7697029Z [rank1]:E1204 12:02:39.913000 422587 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:05:39.7697069Z dist init r=1, world=4 2025-12-04T12:05:39.7697205Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7697362Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7697643Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7697796Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7698097Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7698221Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7698494Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7698640Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7698914Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7699059Z [rank3]:E1204 12:02:40.021000 422589 
site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7699333Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7699467Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7699742Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7699912Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7700417Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2250244096 and is now 2986344448. 2025-12-04T12:05:39.7700532Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7700757Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7701151Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7701262Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7701472Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7701634Z [rank3]:E1204 12:02:40.021000 422589 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:05:39.7701671Z dist init r=3, world=4 2025-12-04T12:05:39.7701808Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7701967Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7702252Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7702437Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7702722Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in 
wrapper 2025-12-04T12:05:39.7702843Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7703117Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7703263Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7703535Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7703680Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7703950Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7704113Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7704385Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7704533Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7705037Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096. 
2025-12-04T12:05:39.7705151Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7705344Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7705734Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7705846Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7706053Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7706215Z [rank2]:E1204 12:02:40.093000 422588 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:05:39.7706254Z dist init r=2, world=4 2025-12-04T12:05:39.7706389Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7706628Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7706911Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7707062Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7707345Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7707469Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7707743Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7707888Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7708161Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7708327Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7708599Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7708733Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] 
with policy(): 2025-12-04T12:05:39.7709008Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7709153Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7709656Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648. 2025-12-04T12:05:39.7709771Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7709962Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7710350Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda 2025-12-04T12:05:39.7710461Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7710701Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7710891Z [rank0]:E1204 12:02:40.127000 422586 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:05:39.7710929Z dist init r=0, world=4 2025-12-04T12:05:39.7711259Z [rank0]:[W1204 12:02:40.384451157 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
2025-12-04T12:05:39.7711299Z FAILED [11.8261s] [100%]
2025-12-04T12:05:39.7711303Z
2025-12-04T12:05:39.7711359Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7711490Z _ TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda _
2025-12-04T12:05:39.7711536Z Traceback (most recent call last):
2025-12-04T12:05:39.7711695Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7711741Z self._join_processes(fn)
2025-12-04T12:05:39.7711911Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7711965Z self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7712141Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7712184Z raise RuntimeError(error)
2025-12-04T12:05:39.7712299Z RuntimeError: Process 1 exited with error code 10 and exception:
2025-12-04T12:05:39.7712346Z Traceback (most recent call last):
2025-12-04T12:05:39.7712504Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7712549Z getattr(self, test_name)()
2025-12-04T12:05:39.7712707Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7712743Z fn()
2025-12-04T12:05:39.7712894Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7712934Z method(*args, **kwargs)
2025-12-04T12:05:39.7713085Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7713124Z method(*args, **kwargs)
2025-12-04T12:05:39.7713272Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7713310Z with policy():
2025-12-04T12:05:39.7713460Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7713499Z raise RuntimeError(msg)
2025-12-04T12:05:39.7713881Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7713883Z
2025-12-04T12:05:39.7713958Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7714226Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda
2025-12-04T12:05:39.7714230Z
2025-12-04T12:05:39.7714319Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7714321Z
2025-12-04T12:05:39.7714323Z
2025-12-04T12:05:39.7714397Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7714485Z Process 1 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7714734Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-c6267b71acc40011.xml -
2025-12-04T12:05:39.7714795Z =========================== short test summary info ============================
2025-12-04T12:05:39.7715073Z FAILED [11.8261s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda - RuntimeError: Process 1 exited with error code 10 and exception:
2025-12-04T12:05:39.7715121Z Traceback (most recent call last):
2025-12-04T12:05:39.7715281Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7715324Z getattr(self, test_name)()
2025-12-04T12:05:39.7715479Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7715514Z fn()
2025-12-04T12:05:39.7715663Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7715704Z method(*args, **kwargs)
2025-12-04T12:05:39.7715852Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7715892Z method(*args, **kwargs)
2025-12-04T12:05:39.7716041Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7716114Z with policy():
2025-12-04T12:05:39.7716264Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7716303Z raise RuntimeError(msg)
2025-12-04T12:05:39.7716692Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7716694Z
2025-12-04T12:05:39.7716768Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7717033Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda
2025-12-04T12:05:39.7717036Z
2025-12-04T12:05:39.7717123Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7717187Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
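Every rank reports the same shape of failure: the caching allocator's tracked bytes grow from 512 to 4608, and driver-side allocated memory grows by 3196059648 - 2459959296 = 736100352 bytes (exactly 702 MiB) on rank 0, with an identical 702 MiB delta on ranks 1-3. A rough sketch of the before/after comparison this kind of leak check performs, using public torch.cuda APIs rather than the internal bookkeeping of common_utils.py (run_test_body is a hypothetical stand-in for the test under measurement):

    import gc
    import torch

    def gpu_mem_snapshot(device: int):
        # Settle pending kernels and Python garbage so the counters are stable.
        torch.cuda.synchronize(device)
        gc.collect()
        free, total = torch.cuda.mem_get_info(device)  # driver view (cudaMemGetInfo / hipMemGetInfo)
        return torch.cuda.memory_allocated(device), total - free

    alloc_before, driver_before = gpu_mem_snapshot(0)
    run_test_body()  # hypothetical: the workload being checked for leaks
    alloc_after, driver_after = gpu_mem_snapshot(0)
    if alloc_after > alloc_before or driver_after > driver_before:
        raise RuntimeError(
            f"possible leak: allocator {alloc_before} -> {alloc_after} bytes, "
            f"driver {driver_before} -> {driver_after} bytes"
        )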
2025-12-04T12:05:39.7717246Z ======================= 1 failed, 9 deselected in 11.84s =======================
2025-12-04T12:05:39.7717284Z Got exit code 1
2025-12-04T12:05:39.7717500Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda
2025-12-04T12:05:39.7717628Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set
2025-12-04T12:05:39.7717816Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-40d8868a4a29e535.xml
2025-12-04T12:05:39.7717872Z ============================= test session starts ==============================
2025-12-04T12:05:39.7717984Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.7718028Z cachedir: .pytest_cache
2025-12-04T12:05:39.7718184Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.7718231Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.7718274Z configfile: pytest.ini
2025-12-04T12:05:39.7718453Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.7718526Z collecting ... collected 10 items / 6 deselected / 4 selected
2025-12-04T12:05:39.7718579Z stepcurrent: skipping 6 already run items.
2025-12-04T12:05:39.7718622Z Running 4 items in this shard
2025-12-04T12:05:39.7718624Z
2025-12-04T12:05:39.7718961Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda I1204 12:02:44.738000 422919 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 422988
2025-12-04T12:05:39.7719116Z I1204 12:02:44.739000 422919 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 422989
2025-12-04T12:05:39.7719266Z I1204 12:02:44.740000 422919 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 422990
2025-12-04T12:05:39.7719417Z I1204 12:02:44.740000 422919 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 422991
2025-12-04T12:05:39.7719902Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7719986Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7720465Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7720523Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7721067Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7721128Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7721605Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7721667Z device_from_device_id = _get_device_from_device_id(
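The four UserWarnings above come from handing FSDP a bare device_id of cuda with no index. A minimal sketch of the two fixes the warning itself suggests, assuming the usual one-process-per-GPU layout where rank doubles as the local device index (rank and model are placeholders, not names from this test):

    import torch
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    # Fix 1: bind the process to its GPU first; an index-less device then
    # resolves to the right one.
    torch.cuda.set_device(rank)
    model = FSDP(model, device_id=torch.cuda.current_device())

    # Fix 2: pass the explicit device index (or an indexed torch.device) directly.
    model = FSDP(model, device_id=rank)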
2025-12-04T12:05:39.7721809Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7721972Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7722257Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7722413Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7722722Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7722847Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7723120Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7723264Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7723540Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7723685Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7723959Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7726452Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7726735Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7726911Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7727418Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7727534Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7727761Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7728154Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7728267Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7728480Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7728644Z [rank0]:E1204 12:02:54.728000 422988 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7728686Z dist init r=0, world=4
2025-12-04T12:05:39.7728826Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7728986Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7729269Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7729443Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7729727Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7729849Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7730123Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7730269Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7730544Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7730729Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7731073Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7731226Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7731498Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7731646Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7732147Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096.
2025-12-04T12:05:39.7732262Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7732455Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7732847Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7732963Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7733171Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7733337Z [rank2]:E1204 12:02:54.765000 422990 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.7733378Z dist init r=2, world=4
2025-12-04T12:05:39.7733515Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7733705Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7733989Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7734143Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7734424Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7734547Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7734820Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7734965Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7735257Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7735416Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7735693Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7735830Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7736106Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7736252Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7736758Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7736876Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7737069Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7737465Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7737577Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7737788Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7737968Z [rank1]:E1204 12:02:54.855000 422989 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7738009Z dist init r=1, world=4
2025-12-04T12:05:39.7738146Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7738336Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7738619Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7738771Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7739055Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7739178Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7739466Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7739623Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7739902Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7740053Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7740324Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7740463Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7740767Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7740915Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7741414Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2250244096 and is now 2986344448.
2025-12-04T12:05:39.7741529Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7741722Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7742109Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7742248Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7742457Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7742621Z [rank3]:E1204 12:02:54.856000 422991 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7742658Z dist init r=3, world=4
2025-12-04T12:05:39.7742991Z [rank0]:[W1204 12:02:55.015406795 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:05:39.7743032Z FAILED [12.2257s] [ 25%]
2025-12-04T12:05:39.7743035Z
2025-12-04T12:05:39.7743089Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7743222Z _ TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda _
2025-12-04T12:05:39.7743267Z Traceback (most recent call last):
2025-12-04T12:05:39.7743427Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7743489Z self._join_processes(fn)
2025-12-04T12:05:39.7743661Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7743730Z self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7743907Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7743952Z raise RuntimeError(error)
2025-12-04T12:05:39.7744033Z RuntimeError: Process 2 exited with error code 10 and exception:
2025-12-04T12:05:39.7744080Z Traceback (most recent call last):
2025-12-04T12:05:39.7744242Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7744284Z getattr(self, test_name)()
2025-12-04T12:05:39.7744442Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7744477Z fn()
2025-12-04T12:05:39.7744627Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7744668Z method(*args, **kwargs)
2025-12-04T12:05:39.7744818Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7744858Z method(*args, **kwargs)
2025-12-04T12:05:39.7745008Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7745048Z with policy():
2025-12-04T12:05:39.7745197Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7745239Z raise RuntimeError(msg)
2025-12-04T12:05:39.7745615Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096.
2025-12-04T12:05:39.7745619Z
2025-12-04T12:05:39.7745694Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7745957Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7745959Z
2025-12-04T12:05:39.7746069Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7746071Z
2025-12-04T12:05:39.7746073Z
2025-12-04T12:05:39.7746148Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7746235Z Process 2 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7746469Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-40d8868a4a29e535.xml -
2025-12-04T12:05:39.7746529Z =========================== short test summary info ============================
2025-12-04T12:05:39.7746810Z FAILED [12.2257s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda - RuntimeError: Process 2 exited with error code 10 and exception:
2025-12-04T12:05:39.7746855Z Traceback (most recent call last):
2025-12-04T12:05:39.7747022Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7747066Z getattr(self, test_name)()
2025-12-04T12:05:39.7747224Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7747271Z fn()
2025-12-04T12:05:39.7747423Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7747474Z method(*args, **kwargs)
2025-12-04T12:05:39.7747624Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7747663Z method(*args, **kwargs)
2025-12-04T12:05:39.7747813Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7747851Z with policy():
2025-12-04T12:05:39.7748003Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7748044Z raise RuntimeError(msg)
2025-12-04T12:05:39.7748422Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096.
2025-12-04T12:05:39.7748426Z
2025-12-04T12:05:39.7748501Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7748765Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7748767Z
2025-12-04T12:05:39.7748853Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7748917Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
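What follows below is the harness's isolation retry: after stopping at the first failure it reruns the failing test alone in a fresh pytest session ("Retrying single test...") to separate flaky from consistently failing tests; the strategy1 test above already went through this cycle and was marked FAILED CONSISTENTLY. The same selection can be approximated by hand with the node id from the summary (run from the repo's test/ directory; this skips the harness's run_test.py and stepcurrent bookkeeping):

    python -m pytest distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda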
2025-12-04T12:05:39.7748979Z ======================= 1 failed, 6 deselected in 12.24s =======================
2025-12-04T12:05:39.7749015Z Got exit code 1
2025-12-04T12:05:39.7749056Z Retrying single test...
2025-12-04T12:05:39.7749241Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-509d336e3270e699.xml
2025-12-04T12:05:39.7749299Z ============================= test session starts ==============================
2025-12-04T12:05:39.7749414Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.7749457Z cachedir: .pytest_cache
2025-12-04T12:05:39.7749613Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.7749659Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.7749698Z configfile: pytest.ini
2025-12-04T12:05:39.7749886Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.7749959Z collecting ... collected 10 items / 9 deselected / 1 selected
2025-12-04T12:05:39.7750347Z stepcurrent: skipping 6 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7750391Z Running 1 items in this shard
2025-12-04T12:05:39.7750395Z
2025-12-04T12:05:39.7750784Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda I1204 12:02:59.891000 423321 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 423390
2025-12-04T12:05:39.7750939Z I1204 12:02:59.892000 423321 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 423391
2025-12-04T12:05:39.7751091Z I1204 12:02:59.892000 423321 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 423392
2025-12-04T12:05:39.7751238Z I1204 12:02:59.893000 423321 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 423393
2025-12-04T12:05:39.7751749Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7751827Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7752309Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7752370Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7752846Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7752903Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7753379Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7753436Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7753578Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7753737Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7754022Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7754202Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7754486Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7754612Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7754885Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7755036Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7755314Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7755458Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7755729Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7755878Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7756166Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7756312Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7756822Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7756939Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7757131Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7757522Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7757634Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7757845Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7758008Z [rank1]:E1204 12:03:09.772000 423391 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7758047Z dist init r=1, world=4
2025-12-04T12:05:39.7758184Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7758371Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7758675Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7758825Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7759110Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7759233Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7759507Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7759654Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7759925Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7760084Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7760367Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7760501Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7760811Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7760957Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7761461Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2250244096 and is now 2986344448.
2025-12-04T12:05:39.7761574Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7761769Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7762161Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7762273Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7762481Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7762644Z [rank3]:E1204 12:03:09.860000 423393 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7762712Z dist init r=3, world=4
2025-12-04T12:05:39.7762848Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7763004Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7763286Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7763439Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7763721Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7763844Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7764115Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7764274Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7764566Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7764710Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7764983Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7765115Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7765389Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7765535Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7766040Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096.
2025-12-04T12:05:39.7766153Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7766345Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7766733Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7766843Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7767069Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7767229Z [rank2]:E1204 12:03:09.865000 423392 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.7767269Z dist init r=2, world=4
2025-12-04T12:05:39.7767404Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7767562Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7767844Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7767996Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7768278Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7768411Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7768696Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7768840Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7769115Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7769260Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7769532Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7769668Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7769939Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7770087Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7770591Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7770747Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7770939Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7771353Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7771466Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7771673Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7771836Z [rank0]:E1204 12:03:09.873000 423390 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7771873Z dist init r=0, world=4
2025-12-04T12:05:39.7772207Z [rank0]:[W1204 12:03:10.000176659 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:05:39.7772247Z FAILED [12.1245s] [100%]
2025-12-04T12:05:39.7772249Z
2025-12-04T12:05:39.7772303Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7772434Z _ TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda _
2025-12-04T12:05:39.7772493Z Traceback (most recent call last):
2025-12-04T12:05:39.7772654Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7772710Z self._join_processes(fn)
2025-12-04T12:05:39.7775188Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7775252Z self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7775436Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7775480Z raise RuntimeError(error)
2025-12-04T12:05:39.7775561Z RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7775606Z Traceback (most recent call last):
2025-12-04T12:05:39.7775769Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7775812Z getattr(self, test_name)()
2025-12-04T12:05:39.7775971Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7776007Z fn()
2025-12-04T12:05:39.7776158Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7776198Z method(*args, **kwargs)
2025-12-04T12:05:39.7776350Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7776390Z method(*args, **kwargs)
2025-12-04T12:05:39.7776538Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7776576Z with policy():
2025-12-04T12:05:39.7776726Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7776770Z raise RuntimeError(msg)
2025-12-04T12:05:39.7777149Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7777153Z
2025-12-04T12:05:39.7777231Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7777535Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7777538Z
2025-12-04T12:05:39.7777628Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7777631Z
2025-12-04T12:05:39.7777633Z
2025-12-04T12:05:39.7777710Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7777797Z Process 0 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7778031Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-509d336e3270e699.xml -
2025-12-04T12:05:39.7778091Z =========================== short test summary info ============================
2025-12-04T12:05:39.7778372Z FAILED [12.1245s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda - RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7778418Z Traceback (most recent call last):
2025-12-04T12:05:39.7778582Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7778641Z getattr(self, test_name)()
2025-12-04T12:05:39.7778800Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7778848Z fn()
2025-12-04T12:05:39.7778998Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7779038Z method(*args, **kwargs)
2025-12-04T12:05:39.7779188Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7779227Z method(*args, **kwargs)
2025-12-04T12:05:39.7779376Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7779413Z with policy():
2025-12-04T12:05:39.7779563Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7779604Z raise RuntimeError(msg)
2025-12-04T12:05:39.7779984Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7779987Z
2025-12-04T12:05:39.7780062Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7780328Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7780330Z
2025-12-04T12:05:39.7780417Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7780479Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
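Note on the failure mode above: PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 makes the test wrapper (the `with policy():` frame from common_utils.py in the traceback) snapshot GPU memory counters before the test body and compare them again on exit. Here the caching allocator grew from 512 to 4608 bytes while driver-level allocation grew from 2459959296 to 3196059648 bytes (roughly 0.7 GB) on device 0; the tiny allocator delta cannot explain the driver delta, which usually points at memory owned outside the caching allocator. Below is a minimal sketch of that before/after accounting using only public torch.cuda counters; it is an illustration of the principle, not PyTorch's actual implementation in common_utils.py, and the function name and thresholds are invented for this example:

    import contextlib
    import torch

    @contextlib.contextmanager
    def assert_no_gpu_leak(device: int = 0):
        # Snapshot both the caching-allocator view and the driver-level view.
        torch.cuda.synchronize(device)
        torch.cuda.empty_cache()
        alloc_before = torch.cuda.memory_allocated(device)   # caching allocator, bytes
        free_before, _ = torch.cuda.mem_get_info(device)     # driver-reported free memory, bytes
        try:
            yield
        finally:
            torch.cuda.synchronize(device)
            torch.cuda.empty_cache()
            alloc_after = torch.cuda.memory_allocated(device)
            free_after, _ = torch.cuda.mem_get_info(device)
            # Any growth in either counter after cleanup suggests a leak.
            if alloc_after > alloc_before or free_after < free_before:
                raise RuntimeError(
                    f"possible leak on device {device}: allocator "
                    f"{alloc_before} -> {alloc_after} bytes, driver free "
                    f"{free_before} -> {free_after} bytes"
                )

Driver-level growth with an essentially flat allocator, as in this log, is consistent with communicator or runtime state that was never torn down, which matches the destroy_process_group() warning printed at shutdown.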
2025-12-04T12:05:39.7780543Z ======================= 1 failed, 9 deselected in 12.14s =======================
2025-12-04T12:05:39.7780579Z Got exit code 1
2025-12-04T12:05:39.7780669Z Retrying single test...
2025-12-04T12:05:39.7780856Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-bb05bbb845985de5.xml
2025-12-04T12:05:39.7780916Z ============================= test session starts ==============================
2025-12-04T12:05:39.7781030Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.7781071Z cachedir: .pytest_cache
2025-12-04T12:05:39.7781270Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.7781317Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.7781357Z configfile: pytest.ini
2025-12-04T12:05:39.7781519Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.7781591Z collecting ... collected 10 items / 9 deselected / 1 selected
2025-12-04T12:05:39.7781849Z stepcurrent: skipping 6 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7781893Z Running 1 items in this shard
2025-12-04T12:05:39.7781895Z
2025-12-04T12:05:39.7782237Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda I1204 12:03:14.618000 423723 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 423792
2025-12-04T12:05:39.7782392Z I1204 12:03:14.619000 423723 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 423793
2025-12-04T12:05:39.7782556Z I1204 12:03:14.620000 423723 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 423794
2025-12-04T12:05:39.7782705Z I1204 12:03:14.620000 423723 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 423795
2025-12-04T12:05:39.7783211Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7783275Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7783757Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7783817Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7784295Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7784353Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7784834Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7784892Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7785036Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7785196Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7785504Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7785657Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7785939Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7786065Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7786337Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7786486Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7786760Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7786919Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7787191Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7787336Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7787613Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7787758Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7788263Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7788380Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7788574Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7788964Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7789078Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7789289Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7789451Z [rank0]:E1204 12:03:24.621000 423792 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7789489Z dist init r=0, world=4
2025-12-04T12:05:39.7789645Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7789802Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7790086Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7790238Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7790521Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7790679Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7790953Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7791114Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7791387Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7791547Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7791819Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7791954Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7792225Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7792373Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7792877Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7792989Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7793183Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7793573Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7793687Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7793920Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7794084Z [rank1]:E1204 12:03:24.666000 423793 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7794122Z dist init r=1, world=4
2025-12-04T12:05:39.7794258Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7794415Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7794698Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7794850Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7795131Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7795266Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7795536Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7795699Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7795974Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7796120Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7796391Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7796527Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7796802Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7796947Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7797449Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2250244096 and is now 2986344448.
2025-12-04T12:05:39.7797564Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7797756Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7798166Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7798311Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7798522Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7798684Z [rank3]:E1204 12:03:24.685000 423795 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7798724Z dist init r=3, world=4
2025-12-04T12:05:39.7798860Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7799016Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7799299Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7799450Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:05:39.7799750Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7799883Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:05:39.7800155Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7800300Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7800573Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7800755Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:05:39.7801027Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7801160Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:05:39.7801434Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7801581Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:05:39.7802080Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096.
2025-12-04T12:05:39.7802194Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7802413Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7802801Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7802913Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7803122Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7803283Z [rank2]:E1204 12:03:24.696000 423794 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.7803320Z dist init r=2, world=4
2025-12-04T12:05:39.7803652Z [rank0]:[W1204 12:03:24.737772958 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:05:39.7803709Z FAILED [12.0266s] [100%]
2025-12-04T12:05:39.7803712Z
2025-12-04T12:05:39.7803767Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7803899Z _ TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda _
2025-12-04T12:05:39.7803960Z Traceback (most recent call last):
2025-12-04T12:05:39.7804121Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7804164Z self._join_processes(fn)
2025-12-04T12:05:39.7804337Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7804390Z self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7804566Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7804609Z raise RuntimeError(error)
2025-12-04T12:05:39.7804689Z RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7804733Z Traceback (most recent call last):
2025-12-04T12:05:39.7804893Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7804935Z getattr(self, test_name)()
2025-12-04T12:05:39.7805092Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7805127Z fn()
2025-12-04T12:05:39.7805279Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7805319Z method(*args, **kwargs)
2025-12-04T12:05:39.7805467Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7805508Z method(*args, **kwargs)
2025-12-04T12:05:39.7805656Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7805694Z with policy():
2025-12-04T12:05:39.7805843Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7805885Z raise RuntimeError(msg)
2025-12-04T12:05:39.7806283Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7806286Z
2025-12-04T12:05:39.7806365Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7806630Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7806633Z
2025-12-04T12:05:39.7806724Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7806727Z
2025-12-04T12:05:39.7806729Z
2025-12-04T12:05:39.7806803Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7806890Z Process 0 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7807119Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-bb05bbb845985de5.xml -
2025-12-04T12:05:39.7807180Z =========================== short test summary info ============================
2025-12-04T12:05:39.7807459Z FAILED [12.0266s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda - RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7807515Z Traceback (most recent call last):
2025-12-04T12:05:39.7807678Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7807732Z getattr(self, test_name)()
2025-12-04T12:05:39.7807890Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7807923Z fn()
2025-12-04T12:05:39.7808072Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7808113Z method(*args, **kwargs)
2025-12-04T12:05:39.7808262Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7808301Z method(*args, **kwargs)
2025-12-04T12:05:39.7808448Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7808485Z with policy():
2025-12-04T12:05:39.7808634Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7808675Z raise RuntimeError(msg)
2025-12-04T12:05:39.7809054Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7809056Z
2025-12-04T12:05:39.7809132Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7809394Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda
2025-12-04T12:05:39.7809398Z
2025-12-04T12:05:39.7809485Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7809549Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
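Note on the UserWarning repeated at the start of each run above: every rank passed `device_id` as the bare string "cuda" with no index, so FSDP falls back to the rank's current device, and the warning itself names the two remedies. A minimal sketch of both, assuming one GPU per rank; the function name, model, and rank variables here are placeholders for illustration, not the code under test:

    import torch
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    def wrap_model(model: torch.nn.Module, rank: int) -> FSDP:
        # Remedy 1: pin the current device before FSDP initialization ...
        torch.cuda.set_device(rank)
        # Remedy 2: ... and/or hand FSDP an explicitly indexed device
        # instead of the bare "cuda" that triggered the warning.
        return FSDP(model, device_id=torch.device("cuda", rank))

Either remedy removes the ambiguity the warning flags; it is unrelated to the leak failure itself, which recurs below.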
2025-12-04T12:05:39.7809614Z ======================= 1 failed, 9 deselected in 12.04s ======================= 2025-12-04T12:05:39.7809652Z Got exit code 1 2025-12-04T12:05:39.7809868Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda 2025-12-04T12:05:39.7810014Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set 2025-12-04T12:05:39.7810202Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-03a8eceb8ff2f417.xml 2025-12-04T12:05:39.7810259Z ============================= test session starts ============================== 2025-12-04T12:05:39.7810375Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7810417Z cachedir: .pytest_cache 2025-12-04T12:05:39.7810574Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7810659Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7810700Z configfile: pytest.ini 2025-12-04T12:05:39.7810860Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7810931Z collecting ... collected 10 items / 7 deselected / 3 selected 2025-12-04T12:05:39.7810986Z stepcurrent: skipping 7 already run items. 2025-12-04T12:05:39.7811029Z Running 3 items in this shard 2025-12-04T12:05:39.7811031Z 2025-12-04T12:05:39.7811368Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda I1204 12:03:29.396000 424125 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 424194 2025-12-04T12:05:39.7811544Z I1204 12:03:29.397000 424125 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 424195 2025-12-04T12:05:39.7811710Z I1204 12:03:29.397000 424125 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 424196 2025-12-04T12:05:39.7811858Z I1204 12:03:29.398000 424125 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 424197 2025-12-04T12:05:39.7812356Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7812420Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7812897Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
2025-12-04T12:05:39.7812957Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7813434Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7813493Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7813969Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7814026Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7814195Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7814355Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7814645Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7814797Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7815080Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7815203Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7815478Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7815636Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7815907Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7816065Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7816337Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7816472Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7816749Z [rank0]:E1204 12:03:39.549000 424194 
site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7816898Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7817404Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648. 2025-12-04T12:05:39.7817517Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7817710Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7818099Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda 2025-12-04T12:05:39.7818212Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7818438Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7818599Z [rank0]:E1204 12:03:39.549000 424194 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:05:39.7818639Z dist init r=0, world=4 2025-12-04T12:05:39.7818776Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7818935Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7819219Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7819371Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7819653Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7819786Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7820058Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7820216Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 
2025-12-04T12:05:39.7820490Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7820675Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7820950Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7821086Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7821362Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7821509Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7822011Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096. 2025-12-04T12:05:39.7822125Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7822319Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7822736Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda 2025-12-04T12:05:39.7822848Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7823055Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7823217Z [rank2]:E1204 12:03:39.557000 424196 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:05:39.7823255Z dist init r=2, world=4 2025-12-04T12:05:39.7823391Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7823547Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7823833Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7823984Z [rank1]:E1204 12:03:39.563000 424195 
site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7824278Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7824415Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7824687Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7824834Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7825106Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7825252Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7825525Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7825658Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7825937Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7826082Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7826584Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312. 
2025-12-04T12:05:39.7826698Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7826915Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7827303Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7827414Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7827623Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7827783Z [rank1]:E1204 12:03:39.563000 424195 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7827821Z dist init r=1, world=4
2025-12-04T12:05:39.7827957Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7828118Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7828413Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7828575Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7828857Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7828979Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7829251Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7829396Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7829670Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7829814Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7830088Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7830222Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7830498Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7830678Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7831238Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2250244096 and is now 2986344448.
2025-12-04T12:05:39.7831351Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7831544Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7831932Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7832044Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7832252Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7832414Z [rank3]:E1204 12:03:39.572000 424197 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7832467Z dist init r=3, world=4
2025-12-04T12:05:39.7832801Z [rank0]:[W1204 12:03:39.583769971 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
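
The ProcessGroupNCCL warning above means the test exited without tearing down its process group. A minimal sketch of the explicit init/teardown pattern the linked docs describe, assuming one process per GPU launched by an external launcher that sets MASTER_ADDR/MASTER_PORT (`run_worker` is an illustrative name, not code from this test):

    import torch
    import torch.distributed as dist

    def run_worker(rank: int, world_size: int) -> None:
        torch.cuda.set_device(rank)  # pin this process to its GPU first
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        try:
            ...  # test or training body
        finally:
            # Explicit teardown is what silences the warning in this log.
            dist.destroy_process_group()
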
2025-12-04T12:05:39.7832856Z FAILED [12.1266s] [ 33%]
2025-12-04T12:05:39.7832858Z
2025-12-04T12:05:39.7832918Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7833048Z _ TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda _
2025-12-04T12:05:39.7833099Z Traceback (most recent call last):
2025-12-04T12:05:39.7833260Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7833304Z     self._join_processes(fn)
2025-12-04T12:05:39.7833474Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7833529Z     self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7833704Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7833748Z     raise RuntimeError(error)
2025-12-04T12:05:39.7833826Z RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7833871Z Traceback (most recent call last):
2025-12-04T12:05:39.7834029Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7834073Z     getattr(self, test_name)()
2025-12-04T12:05:39.7834227Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7834261Z     fn()
2025-12-04T12:05:39.7834410Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7834451Z     method(*args, **kwargs)
2025-12-04T12:05:39.7834599Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7834639Z     method(*args, **kwargs)
2025-12-04T12:05:39.7834788Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7834824Z     with policy():
2025-12-04T12:05:39.7834995Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7835037Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7835417Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7835420Z
2025-12-04T12:05:39.7835495Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7835759Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7835762Z
2025-12-04T12:05:39.7835848Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7835851Z
2025-12-04T12:05:39.7835853Z
2025-12-04T12:05:39.7835928Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7836015Z Process 0 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7836244Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-03a8eceb8ff2f417.xml -
2025-12-04T12:05:39.7836316Z =========================== short test summary info ============================
2025-12-04T12:05:39.7836595Z FAILED [12.1266s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda - RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7836653Z Traceback (most recent call last):
2025-12-04T12:05:39.7836817Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7836860Z     getattr(self, test_name)()
2025-12-04T12:05:39.7837018Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7837052Z     fn()
2025-12-04T12:05:39.7837200Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7837243Z     method(*args, **kwargs)
2025-12-04T12:05:39.7837392Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7837433Z     method(*args, **kwargs)
2025-12-04T12:05:39.7837581Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7837619Z     with policy():
2025-12-04T12:05:39.7837767Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7837808Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7838185Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7838188Z
2025-12-04T12:05:39.7838262Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7838526Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7838529Z
2025-12-04T12:05:39.7838615Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7838679Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
2025-12-04T12:05:39.7838758Z ======================= 1 failed, 7 deselected in 12.15s =======================
2025-12-04T12:05:39.7838796Z Got exit code 1
2025-12-04T12:05:39.7838835Z Retrying single test...
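
Exit code 10 is the per-rank leak-check failure: with PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1, the harness wraps each test body in a policy context manager (the `with policy():` frame in every traceback above) that snapshots per-device allocator counters before the test and re-checks them afterwards. A rough sketch of that idea, illustrative only and much simpler than PyTorch's actual CudaMemoryLeakCheck, which also crosschecks the driver-level counters quoted in the error:

    import torch

    class LeakCheck:
        """Snapshot caching-allocator usage around a block (sketch only)."""

        def __enter__(self):
            torch.cuda.synchronize()
            self.before = [torch.cuda.memory_allocated(d)
                           for d in range(torch.cuda.device_count())]
            return self

        def __exit__(self, exc_type, exc, tb):
            if exc_type is not None:
                return False  # keep the test's own exception
            torch.cuda.synchronize()
            torch.cuda.empty_cache()
            for d, before in enumerate(self.before):
                after = torch.cuda.memory_allocated(d)
                if after > before:
                    raise RuntimeError(
                        f"possible leak on device {d}: {before} -> {after} bytes")
            return False

The "was 512 and is now reported as 4608" in every failure message is exactly this kind of before/after delta, reported per device.
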
2025-12-04T12:05:39.7839018Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-79032572997510cc.xml
2025-12-04T12:05:39.7839076Z ============================= test session starts ==============================
2025-12-04T12:05:39.7839189Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.7839231Z cachedir: .pytest_cache
2025-12-04T12:05:39.7839386Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.7839431Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.7839471Z configfile: pytest.ini
2025-12-04T12:05:39.7839630Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.7839703Z collecting ... collected 10 items / 9 deselected / 1 selected
2025-12-04T12:05:39.7839962Z stepcurrent: skipping 7 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7840016Z Running 1 items in this shard
2025-12-04T12:05:39.7840018Z
2025-12-04T12:05:39.7840355Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda I1204 12:03:44.193000 424527 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 424596
2025-12-04T12:05:39.7840519Z I1204 12:03:44.194000 424527 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 424597
2025-12-04T12:05:39.7840732Z I1204 12:03:44.194000 424527 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 424598
2025-12-04T12:05:39.7840882Z I1204 12:03:44.195000 424527 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 424599
2025-12-04T12:05:39.7841372Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7841436Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7841920Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7841980Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7842456Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7842515Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7843024Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7843080Z   device_from_device_id = _get_device_from_device_id(
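
The UserWarning that all four ranks print names its own fix: `device_id` was passed as a bare `cuda` device with no index. A minimal warning-free pattern, assuming one rank per GPU (`wrap_model` is an illustrative name, not code from test_fsdp_comm.py):

    import torch
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    def wrap_model(model: torch.nn.Module, rank: int) -> FSDP:
        # Either pin the current device, pass an indexed device, or both.
        torch.cuda.set_device(rank)
        return FSDP(model, device_id=torch.device("cuda", rank))
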
2025-12-04T12:05:39.7843222Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7843381Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7843667Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7843820Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7844105Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7844227Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7844518Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7844678Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7844950Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7845097Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7845368Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7845503Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7845775Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7845924Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7846428Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312.
2025-12-04T12:05:39.7846543Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7846738Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7847127Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7847259Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7847468Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7847629Z [rank1]:E1204 12:03:54.194000 424597 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:05:39.7847668Z dist init r=1, world=4
2025-12-04T12:05:39.7847803Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7847961Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7848245Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7848396Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7848675Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7848809Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7849097Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7849243Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7849514Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7849659Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7849932Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7850068Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7850343Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7850491Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7851030Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7851147Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7851347Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7851771Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7851883Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7852092Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7852254Z [rank0]:E1204 12:03:54.266000 424596 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7852291Z dist init r=0, world=4
2025-12-04T12:05:39.7852438Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7852593Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7852875Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7853042Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7853338Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7853460Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7853738Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7853886Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7854157Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7854305Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7854576Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7854712Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7854984Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7855133Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7855653Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096.
2025-12-04T12:05:39.7855766Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7855961Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7856350Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7856463Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7856670Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7856833Z [rank2]:E1204 12:03:54.267000 424598 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:05:39.7856875Z dist init r=2, world=4
2025-12-04T12:05:39.7857010Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7857179Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7857483Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7857633Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7857915Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7858036Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7858312Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7858459Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7858730Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7858875Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7859145Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7859279Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7859553Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7859697Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7860216Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2250244096 and is now 2986344448.
2025-12-04T12:05:39.7860331Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7860525Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7860952Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7861062Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7861269Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7861444Z [rank3]:E1204 12:03:54.288000 424599 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7861493Z dist init r=3, world=4
2025-12-04T12:05:39.7861825Z [rank0]:[W1204 12:03:54.386070057 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:05:39.7861865Z FAILED [12.1255s] [100%]
2025-12-04T12:05:39.7861867Z
2025-12-04T12:05:39.7861925Z =================================== FAILURES ===================================
2025-12-04T12:05:39.7862055Z _ TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda _
2025-12-04T12:05:39.7862101Z Traceback (most recent call last):
2025-12-04T12:05:39.7862261Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:05:39.7862304Z     self._join_processes(fn)
2025-12-04T12:05:39.7862474Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:05:39.7862528Z     self._check_return_codes(fn, elapsed_time)
2025-12-04T12:05:39.7862704Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:05:39.7862748Z     raise RuntimeError(error)
2025-12-04T12:05:39.7862826Z RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7862872Z Traceback (most recent call last):
2025-12-04T12:05:39.7863029Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7863072Z     getattr(self, test_name)()
2025-12-04T12:05:39.7863226Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7863260Z     fn()
2025-12-04T12:05:39.7863410Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7863450Z     method(*args, **kwargs)
2025-12-04T12:05:39.7863598Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7863637Z     method(*args, **kwargs)
2025-12-04T12:05:39.7863814Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7863851Z     with policy():
2025-12-04T12:05:39.7864002Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7864043Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7864422Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7864425Z
2025-12-04T12:05:39.7864499Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7864766Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7864770Z
2025-12-04T12:05:39.7864857Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7864860Z
2025-12-04T12:05:39.7864918Z Process 2 exited with error code 10 and exception:
2025-12-04T12:05:39.7864980Z Traceback (most recent call last):
2025-12-04T12:05:39.7865141Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7865183Z     getattr(self, test_name)()
2025-12-04T12:05:39.7865352Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7865385Z     fn()
2025-12-04T12:05:39.7865534Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7865574Z     method(*args, **kwargs)
2025-12-04T12:05:39.7865723Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7865765Z     method(*args, **kwargs)
2025-12-04T12:05:39.7865911Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7865951Z     with policy():
2025-12-04T12:05:39.7866100Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7866140Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7866515Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096.
2025-12-04T12:05:39.7866517Z
2025-12-04T12:05:39.7866592Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7866854Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7866858Z
2025-12-04T12:05:39.7866945Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7866947Z
2025-12-04T12:05:39.7866949Z
2025-12-04T12:05:39.7867024Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:05:39.7867110Z Process 0 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:05:39.7867339Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-79032572997510cc.xml -
2025-12-04T12:05:39.7867398Z =========================== short test summary info ============================
2025-12-04T12:05:39.7867696Z FAILED [12.1255s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda - RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:05:39.7867741Z Traceback (most recent call last):
2025-12-04T12:05:39.7867902Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7867944Z     getattr(self, test_name)()
2025-12-04T12:05:39.7868101Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7868135Z     fn()
2025-12-04T12:05:39.7868285Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7868324Z     method(*args, **kwargs)
2025-12-04T12:05:39.7868472Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7868513Z     method(*args, **kwargs)
2025-12-04T12:05:39.7868659Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7868696Z     with policy():
2025-12-04T12:05:39.7868844Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7868895Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7869270Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7869284Z
2025-12-04T12:05:39.7869359Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7869627Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7869629Z
2025-12-04T12:05:39.7869716Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7869720Z
2025-12-04T12:05:39.7869777Z Process 2 exited with error code 10 and exception:
2025-12-04T12:05:39.7869821Z Traceback (most recent call last):
2025-12-04T12:05:39.7869982Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7870023Z     getattr(self, test_name)()
2025-12-04T12:05:39.7870180Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7870213Z     fn()
2025-12-04T12:05:39.7870361Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7870402Z     method(*args, **kwargs)
2025-12-04T12:05:39.7870552Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7870591Z     method(*args, **kwargs)
2025-12-04T12:05:39.7870774Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7870809Z     with policy():
2025-12-04T12:05:39.7870958Z   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7870999Z     raise RuntimeError(msg)
2025-12-04T12:05:39.7871377Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096.
2025-12-04T12:05:39.7871409Z
2025-12-04T12:05:39.7871482Z To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7871748Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7871751Z
2025-12-04T12:05:39.7871837Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7871901Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
2025-12-04T12:05:39.7871963Z ======================= 1 failed, 9 deselected in 12.14s =======================
2025-12-04T12:05:39.7871999Z Got exit code 1
2025-12-04T12:05:39.7872039Z Retrying single test...
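
As in the first attempt, pytest stops after the first failure, the runner sees exit code 1, and it retries just the failing test, with the stepcurrent plugin deselecting everything already run. A toy sketch of that outer retry loop, under the assumption of a plain pytest invocation (this is only the shape the log implies, not PyTorch's actual run_test.py logic):

    import subprocess

    TEST_ID = ("test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::"
               "test_communication_nested_model_True_use_no_sync_True_"
               "sharding_strategy1_cuda")

    def run_with_retries(max_retries: int = 2) -> int:
        code = 1
        for _ in range(1 + max_retries):
            # -x mirrors the "stopping after 1 failures" behaviour above.
            code = subprocess.call(["python", "-m", "pytest", "-x", "-v", TEST_ID])
            print(f"Got exit code {code}")
            if code == 0:
                break
            print("Retrying single test...")
        return code
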
2025-12-04T12:05:39.7872223Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-465fc81b312e44a4.xml
2025-12-04T12:05:39.7872282Z ============================= test session starts ==============================
2025-12-04T12:05:39.7872393Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:05:39.7872434Z cachedir: .pytest_cache
2025-12-04T12:05:39.7872613Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:05:39.7872659Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:05:39.7872699Z configfile: pytest.ini
2025-12-04T12:05:39.7872875Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:05:39.7872948Z collecting ... collected 10 items / 9 deselected / 1 selected
2025-12-04T12:05:39.7873205Z stepcurrent: skipping 7 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7873249Z Running 1 items in this shard
2025-12-04T12:05:39.7873251Z
2025-12-04T12:05:39.7873586Z distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda I1204 12:03:58.887000 424929 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 424998
2025-12-04T12:05:39.7873740Z I1204 12:03:58.888000 424929 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 424999
2025-12-04T12:05:39.7873892Z I1204 12:03:58.889000 424929 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 425000
2025-12-04T12:05:39.7874041Z I1204 12:03:58.889000 424929 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 425001
2025-12-04T12:05:39.7874531Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7874593Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7875075Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7875135Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7875634Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7875691Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7876169Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:05:39.7876225Z   device_from_device_id = _get_device_from_device_id(
2025-12-04T12:05:39.7876368Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7876528Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7876814Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7876978Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7877273Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7877396Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7877670Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7877816Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7878088Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7878236Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7878509Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7878646Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7878921Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7879067Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7879572Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2243952640 and is now 2986344448.
2025-12-04T12:05:39.7879708Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7879900Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7880290Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7880403Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7880641Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7880804Z [rank3]:E1204 12:04:08.614000 425001 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:05:39.7880845Z dist init r=3, world=4
2025-12-04T12:05:39.7880982Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7881156Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7881439Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7881605Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7881887Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7882008Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7882282Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7882428Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7882700Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:05:39.7882846Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]     method(*args, **kwargs)
2025-12-04T12:05:39.7883117Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:05:39.7883252Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]     with policy():
2025-12-04T12:05:39.7883526Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:05:39.7883671Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]     raise RuntimeError(msg)
2025-12-04T12:05:39.7884198Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 0. CUDA driver allocated memory was 2459959296 and is now 3196059648.
2025-12-04T12:05:39.7884313Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7884505Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:05:39.7884893Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda
2025-12-04T12:05:39.7885006Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:05:39.7885214Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:05:39.7885387Z [rank0]:E1204 12:04:08.837000 424998 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:05:39.7885435Z dist init r=0, world=4
2025-12-04T12:05:39.7885571Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:05:39.7885727Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:05:39.7886012Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:05:39.7886163Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935]     getattr(self, test_name)()
2025-12-04T12:05:39.7886444Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:05:39.7886567Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935]     fn()
2025-12-04T12:05:39.7886838Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7886986Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7887256Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7887402Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7887675Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7887810Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7888109Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7888255Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7888757Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 1. CUDA driver allocated memory was 2317352960 and is now 3053453312. 
2025-12-04T12:05:39.7888870Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7889064Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7889452Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda 2025-12-04T12:05:39.7889573Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7889781Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7889954Z [rank1]:E1204 12:04:08.858000 424999 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:05:39.7889992Z dist init r=1, world=4 2025-12-04T12:05:39.7890129Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7890286Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7890568Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7890787Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7891069Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7891189Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7891463Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7891609Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7891880Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7892025Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7892321Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7892456Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] 
with policy(): 2025-12-04T12:05:39.7892828Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7892976Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7893477Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 2. CUDA driver allocated memory was 2300575744 and is now 3036676096. 2025-12-04T12:05:39.7893589Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7893783Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7894185Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda 2025-12-04T12:05:39.7894311Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7894519Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7894681Z [rank2]:E1204 12:04:08.949000 425000 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:05:39.7894717Z dist init r=2, world=4 2025-12-04T12:05:39.7895054Z [rank0]:[W1204 12:04:09.883663082 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:05:39.7895094Z FAILED [11.6250s] [100%] 2025-12-04T12:05:39.7895096Z 2025-12-04T12:05:39.7895151Z =================================== FAILURES =================================== 2025-12-04T12:05:39.7895281Z _ TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda _ 2025-12-04T12:05:39.7895328Z Traceback (most recent call last): 2025-12-04T12:05:39.7895490Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:05:39.7895533Z self._join_processes(fn) 2025-12-04T12:05:39.7895704Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:05:39.7895758Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:05:39.7895933Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:05:39.7895977Z raise RuntimeError(error) 2025-12-04T12:05:39.7896055Z RuntimeError: Process 3 exited with error code 10 and exception: 2025-12-04T12:05:39.7896098Z Traceback (most recent call last): 2025-12-04T12:05:39.7896256Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7896297Z getattr(self, test_name)() 2025-12-04T12:05:39.7896474Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7896507Z fn() 2025-12-04T12:05:39.7896656Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7896697Z method(*args, **kwargs) 2025-12-04T12:05:39.7896845Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7896885Z method(*args, **kwargs) 2025-12-04T12:05:39.7897035Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7897071Z with policy(): 2025-12-04T12:05:39.7897220Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7897261Z raise RuntimeError(msg) 2025-12-04T12:05:39.7897640Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2243952640 and is now 2986344448. 
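The ProcessGroupNCCL warning above ("destroy_process_group() was not called before program exit") points at the shutdown pattern documented at the linked page. A minimal sketch of that pattern, assuming an environment-variable rendezvous; the setup details here are illustrative, not taken from this test harness:

    import os
    import torch.distributed as dist

    def main(rank: int, world_size: int) -> None:
        # Assumed rendezvous; a launcher such as torchrun normally sets these.
        os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
        os.environ.setdefault("MASTER_PORT", "29500")
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        try:
            pass  # training / test body goes here
        finally:
            # Explicit teardown; avoids the resource-leak warning above.
            dist.destroy_process_group()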
2025-12-04T12:05:39.7897652Z 2025-12-04T12:05:39.7897727Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7897991Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda 2025-12-04T12:05:39.7898007Z 2025-12-04T12:05:39.7898094Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7898096Z 2025-12-04T12:05:39.7898098Z 2025-12-04T12:05:39.7898171Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:05:39.7898259Z Process 3 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:05:39.7898486Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-465fc81b312e44a4.xml - 2025-12-04T12:05:39.7898547Z =========================== short test summary info ============================ 2025-12-04T12:05:39.7898823Z FAILED [11.6250s] distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda - RuntimeError: Process 3 exited with error code 10 and exception: 2025-12-04T12:05:39.7898870Z Traceback (most recent call last): 2025-12-04T12:05:39.7899030Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7899071Z getattr(self, test_name)() 2025-12-04T12:05:39.7899229Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7899263Z fn() 2025-12-04T12:05:39.7899413Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7899451Z method(*args, **kwargs) 2025-12-04T12:05:39.7899602Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7899640Z method(*args, **kwargs) 2025-12-04T12:05:39.7899789Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7899825Z with policy(): 2025-12-04T12:05:39.7899975Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7900014Z raise RuntimeError(msg) 2025-12-04T12:05:39.7900411Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda! Caching allocator allocated memory was 512 and is now reported as 4608 on device 3. CUDA driver allocated memory was 2243952640 and is now 2986344448. 2025-12-04T12:05:39.7900413Z 2025-12-04T12:05:39.7900489Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7900795Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestCommunicationCUDA.test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda 2025-12-04T12:05:39.7900798Z 2025-12-04T12:05:39.7900886Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7900948Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
2025-12-04T12:05:39.7901009Z ======================= 1 failed, 9 deselected in 11.64s ======================= 2025-12-04T12:05:39.7901045Z Got exit code 1 2025-12-04T12:05:39.7901260Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda 2025-12-04T12:05:39.7901386Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set 2025-12-04T12:05:39.7901657Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-442fbdf990c2c97f.xml 2025-12-04T12:05:39.7901714Z ============================= test session starts ============================== 2025-12-04T12:05:39.7901843Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7901883Z cachedir: .pytest_cache 2025-12-04T12:05:39.7902038Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7902083Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7902125Z configfile: pytest.ini 2025-12-04T12:05:39.7902283Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7902355Z collecting ... collected 10 items / 8 deselected / 2 selected 2025-12-04T12:05:39.7902409Z stepcurrent: skipping 8 already run items. 2025-12-04T12:05:39.7902451Z Running 2 items in this shard 2025-12-04T12:05:39.7902453Z 2025-12-04T12:05:39.7902755Z distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_False_cuda I1204 12:04:13.278000 425331 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 425400 2025-12-04T12:05:39.7902908Z I1204 12:04:13.279000 425331 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 425401 2025-12-04T12:05:39.7903429Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7903491Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7903971Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7904030Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7905125Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. 
If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:76.) 2025-12-04T12:05:39.7905253Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:05:39.7906294Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:76.) 2025-12-04T12:05:39.7906437Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:05:39.7906580Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7906744Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7907033Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7907187Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7907467Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7907591Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7907865Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7908012Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7908284Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7908430Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 
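The c10d::allreduce_ UserWarning above fires when autograd backprops through an operator that has no Autograd kernel; the message's own suggestion for non-differentiable ops is to register a fallthrough on DispatchKey::Autograd. A rough Python-side sketch of that idea for a hypothetical custom operator (myns::scale is invented for illustration, and using torch.library this way is an assumption about the Python equivalent; the in-tree c10d::allreduce_ registration is owned by PyTorch itself):

    import torch
    from torch.library import Library, fallthrough_kernel

    lib = Library("myns", "DEF")               # hypothetical namespace
    lib.define("scale(Tensor x) -> Tensor")

    def scale_impl(x: torch.Tensor) -> torch.Tensor:
        return x * 2.0                         # example non-differentiable kernel

    lib.impl("scale", scale_impl, "CPU")
    # Analogue of registering torch::CppFunction::makeFallthrough() to
    # DispatchKey::Autograd: dispatch skips autograd for this op, which
    # silences the "autograd kernel was not registered" warning.
    lib.impl("scale", fallthrough_kernel, "Autograd")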
2025-12-04T12:05:39.7908701Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7908858Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7909134Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7909281Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7909749Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda! Caching allocator allocated memory was 512 and is now reported as 13824 on device 0. CUDA driver allocated memory was 2019557376 and is now 3489660928. 2025-12-04T12:05:39.7909865Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7910059Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7910421Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7910554Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7910797Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7910958Z [rank0]:E1204 12:04:22.976000 425400 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:05:39.7910999Z dist init r=0, world=2 2025-12-04T12:05:39.7911136Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7911296Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7911580Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7911734Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7912017Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7912139Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 
2025-12-04T12:05:39.7912413Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7912559Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7912832Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7912976Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7913274Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7913408Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7913686Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7913834Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7914302Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda! Caching allocator allocated memory was 512 and is now reported as 13824 on device 1. CUDA driver allocated memory was 1864368128 and is now 3334471680. 2025-12-04T12:05:39.7914416Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7914621Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7914986Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7915098Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7915306Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7915468Z [rank1]:E1204 12:04:23.028000 425401 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:05:39.7915507Z dist init r=1, world=2 2025-12-04T12:05:39.7915838Z [rank0]:[W1204 12:04:23.017840934 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:05:39.7915879Z FAILED [11.6225s] [ 50%] 2025-12-04T12:05:39.7915881Z 2025-12-04T12:05:39.7915937Z =================================== FAILURES =================================== 2025-12-04T12:05:39.7916035Z ____ TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda _____ 2025-12-04T12:05:39.7916082Z Traceback (most recent call last): 2025-12-04T12:05:39.7916241Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:05:39.7916285Z self._join_processes(fn) 2025-12-04T12:05:39.7916454Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:05:39.7916509Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:05:39.7916685Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:05:39.7916728Z raise RuntimeError(error) 2025-12-04T12:05:39.7916808Z RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:05:39.7916851Z Traceback (most recent call last): 2025-12-04T12:05:39.7917011Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7917071Z getattr(self, test_name)() 2025-12-04T12:05:39.7917227Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7917262Z fn() 2025-12-04T12:05:39.7917412Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7917452Z method(*args, **kwargs) 2025-12-04T12:05:39.7917601Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7917642Z method(*args, **kwargs) 2025-12-04T12:05:39.7917790Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7917826Z with policy(): 2025-12-04T12:05:39.7917979Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7918019Z raise RuntimeError(msg) 2025-12-04T12:05:39.7918366Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda! Caching allocator allocated memory was 512 and is now reported as 13824 on device 0. CUDA driver allocated memory was 2019557376 and is now 3489660928. 
2025-12-04T12:05:39.7918380Z 2025-12-04T12:05:39.7918454Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7918700Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7918702Z 2025-12-04T12:05:39.7918790Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7918792Z 2025-12-04T12:05:39.7918794Z 2025-12-04T12:05:39.7918868Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:05:39.7918955Z Process 0 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:05:39.7919183Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-442fbdf990c2c97f.xml - 2025-12-04T12:05:39.7919245Z =========================== short test summary info ============================ 2025-12-04T12:05:39.7919488Z FAILED [11.6225s] distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_False_cuda - RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:05:39.7919535Z Traceback (most recent call last): 2025-12-04T12:05:39.7919696Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7919741Z getattr(self, test_name)() 2025-12-04T12:05:39.7919900Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7919936Z fn() 2025-12-04T12:05:39.7920084Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7920127Z method(*args, **kwargs) 2025-12-04T12:05:39.7920276Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7920316Z method(*args, **kwargs) 2025-12-04T12:05:39.7920466Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7920502Z with policy(): 2025-12-04T12:05:39.7920692Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7920732Z raise RuntimeError(msg) 2025-12-04T12:05:39.7921105Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda! Caching allocator allocated memory was 512 and is now reported as 13824 on device 0. CUDA driver allocated memory was 2019557376 and is now 3489660928. 2025-12-04T12:05:39.7921108Z 2025-12-04T12:05:39.7921181Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7921407Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7921410Z 2025-12-04T12:05:39.7921495Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7921558Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
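Each of these sessions also prints the _init_utils.py UserWarning about `device_id` being the bare "cuda" device with no explicit index. The warning's two suggested fixes, sketched below; the LOCAL_RANK variable from a torchrun-style launcher and the already-initialized process group are assumptions, not part of this harness:

    import os
    import torch
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    local_rank = int(os.environ["LOCAL_RANK"])  # assumes a torchrun-style launcher
    torch.cuda.set_device(local_rank)           # fix 1: set the current device first

    model = torch.nn.Linear(8, 8)
    # fix 2: pass an explicit device index rather than the bare "cuda" string.
    # (A default process group must already be initialized; omitted here.)
    fsdp_model = FSDP(model, device_id=torch.device("cuda", local_rank))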
2025-12-04T12:05:39.7921618Z ======================= 1 failed, 8 deselected in 11.64s ======================= 2025-12-04T12:05:39.7921655Z Got exit code 1 2025-12-04T12:05:39.7921694Z Retrying single test... 2025-12-04T12:05:39.7921881Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-077e30fa9db58a23.xml 2025-12-04T12:05:39.7921937Z ============================= test session starts ============================== 2025-12-04T12:05:39.7922063Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7922104Z cachedir: .pytest_cache 2025-12-04T12:05:39.7922258Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7922318Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7922358Z configfile: pytest.ini 2025-12-04T12:05:39.7922518Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7922590Z collecting ... collected 10 items / 9 deselected / 1 selected 2025-12-04T12:05:39.7922810Z stepcurrent: skipping 8 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7922852Z Running 1 items in this shard 2025-12-04T12:05:39.7922854Z 2025-12-04T12:05:39.7923152Z distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_False_cuda I1204 12:04:27.453000 425567 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 425636 2025-12-04T12:05:39.7923304Z I1204 12:04:27.453000 425567 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 425637 2025-12-04T12:05:39.7923792Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7923853Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7924329Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7924390Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7925460Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). 
If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:76.) 2025-12-04T12:05:39.7925585Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:05:39.7926627Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:76.) 2025-12-04T12:05:39.7926759Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:05:39.7926911Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7927071Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7927361Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7927517Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7927801Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7927925Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7928199Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7928347Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7928622Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7928767Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7929040Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7929175Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7929470Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7929615Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7930086Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda! Caching allocator allocated memory was 512 and is now reported as 13824 on device 1. CUDA driver allocated memory was 1864368128 and is now 3334471680. 2025-12-04T12:05:39.7930200Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7930392Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7930805Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7930933Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7931141Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7931317Z [rank1]:E1204 12:04:37.328000 425637 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:05:39.7931356Z dist init r=1, world=2 2025-12-04T12:05:39.7931492Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7931649Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7931934Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7932087Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7932370Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7932491Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7932765Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7932910Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7933205Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7933352Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7933646Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7933780Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7934052Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7934199Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7934669Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda! Caching allocator allocated memory was 512 and is now reported as 13824 on device 0. CUDA driver allocated memory was 2019557376 and is now 3489660928. 2025-12-04T12:05:39.7934783Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7934977Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7935342Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7935467Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7935673Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7935836Z [rank0]:E1204 12:04:37.332000 425636 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:05:39.7935874Z dist init r=0, world=2 2025-12-04T12:05:39.7936205Z [rank0]:[W1204 12:04:37.393866854 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:05:39.7936246Z FAILED [11.8241s] [100%] 2025-12-04T12:05:39.7936249Z 2025-12-04T12:05:39.7936303Z =================================== FAILURES =================================== 2025-12-04T12:05:39.7936403Z ____ TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda _____ 2025-12-04T12:05:39.7936447Z Traceback (most recent call last): 2025-12-04T12:05:39.7936608Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:05:39.7936652Z self._join_processes(fn) 2025-12-04T12:05:39.7936825Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:05:39.7936877Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:05:39.7937054Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:05:39.7937096Z raise RuntimeError(error) 2025-12-04T12:05:39.7937177Z RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:05:39.7937220Z Traceback (most recent call last): 2025-12-04T12:05:39.7937379Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7937420Z getattr(self, test_name)() 2025-12-04T12:05:39.7937576Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7937630Z fn() 2025-12-04T12:05:39.7937781Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7937821Z method(*args, **kwargs) 2025-12-04T12:05:39.7937972Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7938012Z method(*args, **kwargs) 2025-12-04T12:05:39.7938160Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7938197Z with policy(): 2025-12-04T12:05:39.7938348Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7938390Z raise RuntimeError(msg) 2025-12-04T12:05:39.7938735Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda! Caching allocator allocated memory was 512 and is now reported as 13824 on device 0. CUDA driver allocated memory was 2019557376 and is now 3489660928. 
2025-12-04T12:05:39.7938737Z 2025-12-04T12:05:39.7938812Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7939049Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7939063Z 2025-12-04T12:05:39.7939151Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7939153Z 2025-12-04T12:05:39.7939154Z 2025-12-04T12:05:39.7939228Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:05:39.7939314Z Process 0 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:05:39.7939544Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-077e30fa9db58a23.xml - 2025-12-04T12:05:39.7939605Z =========================== short test summary info ============================ 2025-12-04T12:05:39.7939849Z FAILED [11.8241s] distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_False_cuda - RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:05:39.7939894Z Traceback (most recent call last): 2025-12-04T12:05:39.7940056Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7940099Z getattr(self, test_name)() 2025-12-04T12:05:39.7940258Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7940292Z fn() 2025-12-04T12:05:39.7940453Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7940493Z method(*args, **kwargs) 2025-12-04T12:05:39.7940683Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7940722Z method(*args, **kwargs) 2025-12-04T12:05:39.7940874Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7940910Z with policy(): 2025-12-04T12:05:39.7941062Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7941102Z raise RuntimeError(msg) 2025-12-04T12:05:39.7941448Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda! Caching allocator allocated memory was 512 and is now reported as 13824 on device 0. CUDA driver allocated memory was 2019557376 and is now 3489660928. 2025-12-04T12:05:39.7941480Z 2025-12-04T12:05:39.7941555Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7941779Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7941782Z 2025-12-04T12:05:39.7941871Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7941932Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
2025-12-04T12:05:39.7941995Z ======================= 1 failed, 9 deselected in 11.84s ======================= 2025-12-04T12:05:39.7942031Z Got exit code 1 2025-12-04T12:05:39.7942071Z Retrying single test... 2025-12-04T12:05:39.7942254Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-5389250ff332253c.xml 2025-12-04T12:05:39.7942314Z ============================= test session starts ============================== 2025-12-04T12:05:39.7942423Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7942464Z cachedir: .pytest_cache 2025-12-04T12:05:39.7942618Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7942679Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7942719Z configfile: pytest.ini 2025-12-04T12:05:39.7942892Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7942964Z collecting ... collected 10 items / 9 deselected / 1 selected 2025-12-04T12:05:39.7943182Z stepcurrent: skipping 8 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7943225Z Running 1 items in this shard 2025-12-04T12:05:39.7943228Z 2025-12-04T12:05:39.7943530Z distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_False_cuda I1204 12:04:41.721000 425803 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 425872 2025-12-04T12:05:39.7943684Z I1204 12:04:41.722000 425803 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 425873 2025-12-04T12:05:39.7944169Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7944231Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7944710Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7944770Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7945840Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). 
If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:76.) 2025-12-04T12:05:39.7945964Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:05:39.7947006Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:76.) 2025-12-04T12:05:39.7947128Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:05:39.7947279Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7947451Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7947738Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7947894Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7948177Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7948301Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7948578Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7948725Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7949001Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7949147Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7949422Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7949559Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7949833Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7950004Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7950470Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda! Caching allocator allocated memory was 512 and is now reported as 13824 on device 0. CUDA driver allocated memory was 2019557376 and is now 3489660928. 2025-12-04T12:05:39.7950585Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7950810Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7951162Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7951274Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7951481Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7951657Z [rank0]:E1204 12:04:51.444000 425872 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:05:39.7951708Z dist init r=0, world=2 2025-12-04T12:05:39.7951844Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7951999Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7952284Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7952434Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7952717Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7952840Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7953115Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7953260Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7953531Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7953676Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7953948Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7954083Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7954382Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7954529Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7954995Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda! Caching allocator allocated memory was 512 and is now reported as 13824 on device 1. CUDA driver allocated memory was 1864368128 and is now 3334471680. 2025-12-04T12:05:39.7955108Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7955304Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7955652Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7955773Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7955993Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7956154Z [rank1]:E1204 12:04:51.453000 425873 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:05:39.7956193Z dist init r=1, world=2 2025-12-04T12:05:39.7956523Z [rank0]:[W1204 12:04:51.501426032 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:05:39.7956565Z FAILED [11.7228s] [100%] 2025-12-04T12:05:39.7956567Z 2025-12-04T12:05:39.7956624Z =================================== FAILURES =================================== 2025-12-04T12:05:39.7956723Z ____ TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda _____ 2025-12-04T12:05:39.7956769Z Traceback (most recent call last): 2025-12-04T12:05:39.7956932Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:05:39.7956974Z self._join_processes(fn) 2025-12-04T12:05:39.7957145Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:05:39.7957199Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:05:39.7957375Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:05:39.7957418Z raise RuntimeError(error) 2025-12-04T12:05:39.7957497Z RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:05:39.7957542Z Traceback (most recent call last): 2025-12-04T12:05:39.7957700Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7957744Z getattr(self, test_name)() 2025-12-04T12:05:39.7957900Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7957933Z fn() 2025-12-04T12:05:39.7958082Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7958140Z method(*args, **kwargs) 2025-12-04T12:05:39.7958289Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7958329Z method(*args, **kwargs) 2025-12-04T12:05:39.7958479Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7958516Z with policy(): 2025-12-04T12:05:39.7958666Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7958708Z raise RuntimeError(msg) 2025-12-04T12:05:39.7959053Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda! Caching allocator allocated memory was 512 and is now reported as 13824 on device 0. CUDA driver allocated memory was 2019557376 and is now 3489660928. 
2025-12-04T12:05:39.7959055Z 2025-12-04T12:05:39.7959132Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7959359Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7959372Z 2025-12-04T12:05:39.7959459Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7959461Z 2025-12-04T12:05:39.7959463Z 2025-12-04T12:05:39.7959537Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:05:39.7959636Z Process 0 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:05:39.7959864Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-5389250ff332253c.xml - 2025-12-04T12:05:39.7959923Z =========================== short test summary info ============================ 2025-12-04T12:05:39.7960167Z FAILED [11.7228s] distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_False_cuda - RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:05:39.7960213Z Traceback (most recent call last): 2025-12-04T12:05:39.7960376Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7960418Z getattr(self, test_name)() 2025-12-04T12:05:39.7960579Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7960649Z fn() 2025-12-04T12:05:39.7960834Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7960874Z method(*args, **kwargs) 2025-12-04T12:05:39.7961024Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7961066Z method(*args, **kwargs) 2025-12-04T12:05:39.7961213Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7961251Z with policy(): 2025-12-04T12:05:39.7961402Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7961443Z raise RuntimeError(msg) 2025-12-04T12:05:39.7961789Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda! Caching allocator allocated memory was 512 and is now reported as 13824 on device 0. CUDA driver allocated memory was 2019557376 and is now 3489660928. 2025-12-04T12:05:39.7961791Z 2025-12-04T12:05:39.7961865Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7962122Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7962124Z 2025-12-04T12:05:39.7962212Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7962275Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
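The RuntimeError above comes from PyTorch's CUDA memory-leak checker, enabled for this shard via PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 (the same variable the printed repro command sets). It snapshots per-device memory counters before and after the test body and fails the test if both the caching-allocator and the driver-level numbers grew. Below is a minimal sketch of that idea using only public torch.cuda APIs; the helper name and the pass/fail condition are illustrative assumptions, not the internal torch.testing._internal policy used by CI.

    import torch

    def check_for_cuda_leak(fn, device=0):
        # Settle pending kernels and return cached blocks before sampling.
        torch.cuda.synchronize(device)
        torch.cuda.empty_cache()
        alloc_before = torch.cuda.memory_allocated(device)    # caching-allocator view
        free_before, total = torch.cuda.mem_get_info(device)  # driver view: (free, total)
        fn()
        torch.cuda.synchronize(device)
        torch.cuda.empty_cache()
        alloc_after = torch.cuda.memory_allocated(device)
        free_after, _ = torch.cuda.mem_get_info(device)
        # Flag only when both views report growth, mirroring the spirit of
        # the "CUDA driver API confirmed a leak" message above.
        if alloc_after > alloc_before and free_after < free_before:
            raise RuntimeError(
                f"possible CUDA leak on device {device}: allocator "
                f"{alloc_before} -> {alloc_after} bytes, driver-used "
                f"{total - free_before} -> {total - free_after} bytes"
            )

In the failure above, the allocator count moved from 512 to 13824 bytes and the driver-side usage grew by about 1.4 GiB on each device, which is why both ranks exited with code 10.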
2025-12-04T12:05:39.7962336Z ======================= 1 failed, 9 deselected in 11.74s ======================= 2025-12-04T12:05:39.7962372Z Got exit code 1 2025-12-04T12:05:39.7962548Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_False_cuda 2025-12-04T12:05:39.7962675Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set 2025-12-04T12:05:39.7962859Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-1381b50e36560bda.xml 2025-12-04T12:05:39.7962918Z ============================= test session starts ============================== 2025-12-04T12:05:39.7963028Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7963069Z cachedir: .pytest_cache 2025-12-04T12:05:39.7963236Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7963281Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7963320Z configfile: pytest.ini 2025-12-04T12:05:39.7963492Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7963564Z collecting ... collected 10 items / 9 deselected / 1 selected 2025-12-04T12:05:39.7963616Z stepcurrent: skipping 9 already run items. 2025-12-04T12:05:39.7963658Z Running 1 items in this shard 2025-12-04T12:05:39.7963660Z 2025-12-04T12:05:39.7963960Z distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_True_cuda I1204 12:04:55.850000 426039 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 426108 2025-12-04T12:05:39.7964112Z I1204 12:04:55.851000 426039 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 426109 2025-12-04T12:05:39.7964602Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7964664Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7965143Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7965204Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7966277Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. 
DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:76.) 2025-12-04T12:05:39.7966401Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:05:39.7967448Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:76.) 2025-12-04T12:05:39.7967570Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:05:39.7967721Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7967882Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7968181Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7968335Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7968619Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7968743Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7969019Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7969166Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7969443Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7969588Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7969861Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7969996Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7970272Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7970419Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7970949Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda! Caching allocator allocated memory was 512 and is now reported as 9216 on device 1. CUDA driver allocated memory was 1864368128 and is now 3334471680. 2025-12-04T12:05:39.7971064Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7971256Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7971607Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.7971721Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7971930Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7972105Z [rank1]:E1204 12:05:05.744000 426109 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:05:39.7972156Z dist init r=1, world=2 2025-12-04T12:05:39.7972293Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7972450Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7972734Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7972884Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7973167Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7973291Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7973564Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7973711Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7973985Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7974132Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7974402Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7974537Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7974844Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7974990Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7975455Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda! Caching allocator allocated memory was 512 and is now reported as 9216 on device 0. CUDA driver allocated memory was 2019557376 and is now 3489660928. 2025-12-04T12:05:39.7975568Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7975761Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7976111Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.7976235Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7976442Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7976616Z [rank0]:E1204 12:05:05.749000 426108 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:05:39.7976656Z dist init r=0, world=2 2025-12-04T12:05:39.7976986Z [rank0]:[W1204 12:05:06.866549717 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:05:39.7977026Z FAILED [12.0233s] [100%] 2025-12-04T12:05:39.7977028Z 2025-12-04T12:05:39.7977083Z =================================== FAILURES =================================== 2025-12-04T12:05:39.7977182Z _____ TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda _____ 2025-12-04T12:05:39.7977226Z Traceback (most recent call last): 2025-12-04T12:05:39.7977389Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:05:39.7977431Z self._join_processes(fn) 2025-12-04T12:05:39.7977603Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:05:39.7977655Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:05:39.7977833Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:05:39.7977876Z raise RuntimeError(error) 2025-12-04T12:05:39.7977956Z RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T12:05:39.7978001Z Traceback (most recent call last): 2025-12-04T12:05:39.7978159Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7978202Z getattr(self, test_name)() 2025-12-04T12:05:39.7978361Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7978395Z fn() 2025-12-04T12:05:39.7978544Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7978583Z method(*args, **kwargs) 2025-12-04T12:05:39.7978753Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7978794Z method(*args, **kwargs) 2025-12-04T12:05:39.7978940Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7978978Z with policy(): 2025-12-04T12:05:39.7979127Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7979170Z raise RuntimeError(msg) 2025-12-04T12:05:39.7979507Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda! Caching allocator allocated memory was 512 and is now reported as 9216 on device 1. CUDA driver allocated memory was 1864368128 and is now 3334471680. 
2025-12-04T12:05:39.7979510Z 2025-12-04T12:05:39.7979585Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7979809Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.7979811Z 2025-12-04T12:05:39.7979899Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7979916Z 2025-12-04T12:05:39.7979917Z 2025-12-04T12:05:39.7979992Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:05:39.7980101Z Process 1 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:05:39.7980329Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-1381b50e36560bda.xml - 2025-12-04T12:05:39.7980388Z =========================== short test summary info ============================ 2025-12-04T12:05:39.7980671Z FAILED [12.0233s] distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_True_cuda - RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T12:05:39.7980716Z Traceback (most recent call last): 2025-12-04T12:05:39.7980880Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7980922Z getattr(self, test_name)() 2025-12-04T12:05:39.7981080Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7981114Z fn() 2025-12-04T12:05:39.7981263Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7981303Z method(*args, **kwargs) 2025-12-04T12:05:39.7981451Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7981490Z method(*args, **kwargs) 2025-12-04T12:05:39.7981640Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7981676Z with policy(): 2025-12-04T12:05:39.7981827Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7981868Z raise RuntimeError(msg) 2025-12-04T12:05:39.7982205Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda! Caching allocator allocated memory was 512 and is now reported as 9216 on device 1. CUDA driver allocated memory was 1864368128 and is now 3334471680. 2025-12-04T12:05:39.7982208Z 2025-12-04T12:05:39.7982284Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7982536Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.7982538Z 2025-12-04T12:05:39.7982625Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7982687Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
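Each of these runs also prints the FSDP UserWarning about `device_id` being the bare "cuda" device with no index. The warning's own remedy is either to bind the process to a specific GPU before wrapping, or to pass an explicitly indexed device. A minimal sketch of both options follows; the wrapper function and the use of the global rank as the GPU index are illustrative (a real launcher would typically supply LOCAL_RANK), not code from the test above.

    import torch
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    def wrap_with_fsdp(model: torch.nn.Module) -> FSDP:
        local_rank = dist.get_rank()  # illustrative; single-node, one process per GPU
        # Option 1: bind this process to one GPU before wrapping, so a bare
        # "cuda" device_id resolves unambiguously.
        torch.cuda.set_device(local_rank)
        # Option 2: pass an explicitly indexed device instead of plain "cuda".
        return FSDP(model, device_id=torch.device("cuda", local_rank))

Either option silences the warning; FSDP accepts `device_id` as an int or a torch.device.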
2025-12-04T12:05:39.7982749Z ======================= 1 failed, 9 deselected in 12.04s ======================= 2025-12-04T12:05:39.7982785Z Got exit code 1 2025-12-04T12:05:39.7982825Z Retrying single test... 2025-12-04T12:05:39.7983010Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-71be1f4697f6fba0.xml 2025-12-04T12:05:39.7983069Z ============================= test session starts ============================== 2025-12-04T12:05:39.7983179Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.7983220Z cachedir: .pytest_cache 2025-12-04T12:05:39.7983375Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.7983422Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.7983462Z configfile: pytest.ini 2025-12-04T12:05:39.7983622Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.7983711Z collecting ... collected 10 items / 9 deselected / 1 selected 2025-12-04T12:05:39.7983926Z stepcurrent: skipping 9 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.7983985Z Running 1 items in this shard 2025-12-04T12:05:39.7983988Z 2025-12-04T12:05:39.7984284Z distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_True_cuda I1204 12:05:10.348000 426275 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 426344 2025-12-04T12:05:39.7984438Z I1204 12:05:10.349000 426275 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 426345 2025-12-04T12:05:39.7984924Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7984988Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7985470Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.7985531Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.7986582Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). 
If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:76.) 2025-12-04T12:05:39.7986727Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:05:39.7987777Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:76.) 2025-12-04T12:05:39.7987903Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:05:39.7988043Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7988204Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7988502Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7988666Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7988951Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7989075Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7989349Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7989496Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7989771Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7989916Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7990192Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7990327Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7990630Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7990779Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7991273Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda! Caching allocator allocated memory was 512 and is now reported as 9216 on device 0. CUDA driver allocated memory was 2019557376 and is now 3489660928. 2025-12-04T12:05:39.7991388Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7991581Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7991930Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.7992041Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7992250Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7992412Z [rank0]:E1204 12:05:20.180000 426344 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:05:39.7992463Z dist init r=0, world=2 2025-12-04T12:05:39.7992600Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.7992756Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.7993053Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7993207Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.7993488Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7993612Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.7993884Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7994030Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7994303Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.7994448Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.7994723Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.7994859Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.7995132Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.7995298Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.7995759Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda! Caching allocator allocated memory was 512 and is now reported as 9216 on device 1. CUDA driver allocated memory was 1864368128 and is now 3334471680. 2025-12-04T12:05:39.7995872Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7996065Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.7996413Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.7996523Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.7996730Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.7996915Z [rank1]:E1204 12:05:20.188000 426345 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:05:39.7996964Z dist init r=1, world=2 2025-12-04T12:05:39.7997293Z [rank0]:[W1204 12:05:20.330680097 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:05:39.7997333Z FAILED [11.9245s] [100%] 2025-12-04T12:05:39.7997335Z 2025-12-04T12:05:39.7997391Z =================================== FAILURES =================================== 2025-12-04T12:05:39.7997489Z _____ TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda _____ 2025-12-04T12:05:39.7997534Z Traceback (most recent call last): 2025-12-04T12:05:39.7997697Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:05:39.7997740Z self._join_processes(fn) 2025-12-04T12:05:39.7997911Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:05:39.7997964Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:05:39.7998139Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:05:39.7998181Z raise RuntimeError(error) 2025-12-04T12:05:39.7998260Z RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T12:05:39.7998305Z Traceback (most recent call last): 2025-12-04T12:05:39.7998462Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.7998505Z getattr(self, test_name)() 2025-12-04T12:05:39.7998661Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.7998697Z fn() 2025-12-04T12:05:39.7998844Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.8000424Z method(*args, **kwargs) 2025-12-04T12:05:39.8000584Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.8000667Z method(*args, **kwargs) 2025-12-04T12:05:39.8000858Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.8000899Z with policy(): 2025-12-04T12:05:39.8001049Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.8001091Z raise RuntimeError(msg) 2025-12-04T12:05:39.8001432Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda! Caching allocator allocated memory was 512 and is now reported as 9216 on device 1. CUDA driver allocated memory was 1864368128 and is now 3334471680. 
2025-12-04T12:05:39.8001436Z 2025-12-04T12:05:39.8001513Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.8001737Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.8001740Z 2025-12-04T12:05:39.8001827Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.8001829Z 2025-12-04T12:05:39.8001831Z 2025-12-04T12:05:39.8001908Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:05:39.8002008Z Process 1 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:05:39.8002240Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-71be1f4697f6fba0.xml - 2025-12-04T12:05:39.8002317Z =========================== short test summary info ============================ 2025-12-04T12:05:39.8002558Z FAILED [11.9245s] distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_True_cuda - RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T12:05:39.8002602Z Traceback (most recent call last): 2025-12-04T12:05:39.8002767Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.8002809Z getattr(self, test_name)() 2025-12-04T12:05:39.8002967Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.8003002Z fn() 2025-12-04T12:05:39.8003152Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.8003195Z method(*args, **kwargs) 2025-12-04T12:05:39.8003342Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.8003382Z method(*args, **kwargs) 2025-12-04T12:05:39.8003529Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.8003566Z with policy(): 2025-12-04T12:05:39.8003718Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.8003759Z raise RuntimeError(msg) 2025-12-04T12:05:39.8004098Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda! Caching allocator allocated memory was 512 and is now reported as 9216 on device 1. CUDA driver allocated memory was 1864368128 and is now 3334471680. 2025-12-04T12:05:39.8004102Z 2025-12-04T12:05:39.8004178Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.8004402Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.8004404Z 2025-12-04T12:05:39.8004491Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.8004576Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
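The trailing ProcessGroupNCCL warning notes that destroy_process_group() was never called before the worker processes exited. The shutdown pattern it points to (see the linked docs section) looks roughly like the sketch below; the backend choice and the empty body are placeholders, and the usual MASTER_ADDR / MASTER_PORT / RANK / WORLD_SIZE environment from the launcher is assumed.

    import torch.distributed as dist

    def main():
        dist.init_process_group(backend="nccl")
        try:
            pass  # placeholder for the actual training/test body
        finally:
            # Drain outstanding collectives, then tear the communicator down
            # before the process exits, per
            # https://pytorch.org/docs/stable/distributed.html#shutdown
            dist.barrier()
            dist.destroy_process_group()

In these multiprocess tests the workers exit via the leak-check RuntimeError, so the teardown never runs, which is what triggers the warning.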
2025-12-04T12:05:39.8004639Z ======================= 1 failed, 9 deselected in 11.94s ======================= 2025-12-04T12:05:39.8004676Z Got exit code 1 2025-12-04T12:05:39.8004716Z Retrying single test... 2025-12-04T12:05:39.8004903Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-5db489057f90ff1d.xml 2025-12-04T12:05:39.8004962Z ============================= test session starts ============================== 2025-12-04T12:05:39.8005076Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.8005117Z cachedir: .pytest_cache 2025-12-04T12:05:39.8005273Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.8005318Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.8005358Z configfile: pytest.ini 2025-12-04T12:05:39.8005521Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.8005595Z collecting ... collected 10 items / 9 deselected / 1 selected 2025-12-04T12:05:39.8005843Z stepcurrent: skipping 9 already run items. Running only test/distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.8005897Z Running 1 items in this shard 2025-12-04T12:05:39.8005899Z 2025-12-04T12:05:39.8006210Z distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_True_cuda I1204 12:05:24.774000 426511 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 426580 2025-12-04T12:05:39.8006364Z I1204 12:05:24.774000 426511 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 426581 2025-12-04T12:05:39.8006857Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.8006921Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.8007403Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:05:39.8007461Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:05:39.8008528Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). 
If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:76.) 2025-12-04T12:05:39.8008653Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:05:39.8009719Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:76.) 2025-12-04T12:05:39.8009842Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:05:39.8009984Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.8010146Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.8010436Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.8010633Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.8010934Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.8011058Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.8011335Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.8011482Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.8011757Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.8011903Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.8012175Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.8012313Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.8012589Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.8012738Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.8013212Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda! Caching allocator allocated memory was 512 and is now reported as 9216 on device 1. CUDA driver allocated memory was 1864368128 and is now 3334471680. 2025-12-04T12:05:39.8013356Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.8013550Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.8013899Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.8014011Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.8014220Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.8014383Z [rank1]:E1204 12:05:34.519000 426581 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:05:39.8014422Z dist init r=1, world=2 2025-12-04T12:05:39.8014556Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:05:39.8014727Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:05:39.8015014Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.8015179Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:05:39.8015460Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.8015583Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:05:39.8015854Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.8016000Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.8016273Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.8016418Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:05:39.8016688Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.8016823Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:05:39.8017098Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.8017246Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:05:39.8017727Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda! Caching allocator allocated memory was 512 and is now reported as 9216 on device 0. CUDA driver allocated memory was 2019557376 and is now 3489660928. 2025-12-04T12:05:39.8017843Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.8018035Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.8018385Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.8018496Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:05:39.8018706Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.8018866Z [rank0]:E1204 12:05:34.569000 426580 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:05:39.8018915Z dist init r=0, world=2 2025-12-04T12:05:39.8019246Z [rank0]:[W1204 12:05:34.703098188 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:05:39.8019296Z FAILED [11.7245s] [100%] 2025-12-04T12:05:39.8019298Z 2025-12-04T12:05:39.8019354Z =================================== FAILURES =================================== 2025-12-04T12:05:39.8019452Z _____ TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda _____ 2025-12-04T12:05:39.8019499Z Traceback (most recent call last): 2025-12-04T12:05:39.8019660Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:05:39.8019704Z self._join_processes(fn) 2025-12-04T12:05:39.8019875Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:05:39.8019928Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:05:39.8020103Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:05:39.8020149Z raise RuntimeError(error) 2025-12-04T12:05:39.8020227Z RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T12:05:39.8020272Z Traceback (most recent call last): 2025-12-04T12:05:39.8020431Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.8020473Z getattr(self, test_name)() 2025-12-04T12:05:39.8020668Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.8020703Z fn() 2025-12-04T12:05:39.8020882Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.8020923Z method(*args, **kwargs) 2025-12-04T12:05:39.8021072Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.8021112Z method(*args, **kwargs) 2025-12-04T12:05:39.8021262Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.8021298Z with policy(): 2025-12-04T12:05:39.8021477Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.8021519Z raise RuntimeError(msg) 2025-12-04T12:05:39.8021861Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda! Caching allocator allocated memory was 512 and is now reported as 9216 on device 1. CUDA driver allocated memory was 1864368128 and is now 3334471680. 
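[editor's note] The UserWarning near the top of this log complains that `c10d::allreduce_` has no autograd kernel and suggests registering `torch::CppFunction::makeFallthrough()` to `DispatchKey::Autograd`. The Python-side analogue of that registration is `torch.library.fallthrough_kernel`. A minimal sketch follows, using a hypothetical `mylib::noop` operator purely for illustration (registering a fallthrough on the real `c10d` op is a PyTorch-internal decision and is not shown):

```python
import torch

# Hypothetical namespace/op for illustration only.
def_lib = torch.Library("mylib", "DEF")
def_lib.define("noop(Tensor x) -> Tensor")

impl_lib = torch.Library("mylib", "IMPL")
impl_lib.impl("noop", lambda x: x.clone(), "CompositeExplicitAutograd")

# For a non-differentiable op, squash the "no autograd kernel" warning by
# registering a fallthrough on the Autograd key (the Python analogue of
# torch::CppFunction::makeFallthrough()).
impl_lib.impl("noop", torch.library.fallthrough_kernel, "Autograd")
```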
2025-12-04T12:05:39.8021864Z 2025-12-04T12:05:39.8021938Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.8022164Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.8022166Z 2025-12-04T12:05:39.8022253Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.8022255Z 2025-12-04T12:05:39.8022257Z 2025-12-04T12:05:39.8022334Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:05:39.8022419Z Process 1 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:05:39.8022647Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-5db489057f90ff1d.xml - 2025-12-04T12:05:39.8022721Z =========================== short test summary info ============================ 2025-12-04T12:05:39.8022960Z FAILED [11.7245s] distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_True_cuda - RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T12:05:39.8023023Z Traceback (most recent call last): 2025-12-04T12:05:39.8023184Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:05:39.8023228Z getattr(self, test_name)() 2025-12-04T12:05:39.8023385Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:05:39.8023420Z fn() 2025-12-04T12:05:39.8023568Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.8023610Z method(*args, **kwargs) 2025-12-04T12:05:39.8023757Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:05:39.8023799Z method(*args, **kwargs) 2025-12-04T12:05:39.8023946Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:05:39.8023983Z with policy(): 2025-12-04T12:05:39.8024133Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:05:39.8024173Z raise RuntimeError(msg) 2025-12-04T12:05:39.8024513Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda! Caching allocator allocated memory was 512 and is now reported as 9216 on device 1. CUDA driver allocated memory was 1864368128 and is now 3334471680. 2025-12-04T12:05:39.8024517Z 2025-12-04T12:05:39.8024590Z To execute this test, run the following from the base repo dir: 2025-12-04T12:05:39.8024813Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_comm.py TestExplicitUnshardCUDA.test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.8024817Z 2025-12-04T12:05:39.8024902Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:05:39.8024964Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
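[editor's note] The "CUDA driver API confirmed a leak" failures above come from the harness enabled by PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1, which snapshots per-device memory counters before and after each test and raises if they grew. A rough stand-alone sketch of that kind of before/after check (not the actual CUDAMemoryLeakCheck implementation) might look like:

```python
import gc
import torch

def assert_no_leak(device: int, test_fn) -> None:
    # Snapshot caching-allocator usage before running the test body.
    torch.cuda.synchronize(device)
    before = torch.cuda.memory_allocated(device)
    test_fn()
    # Drop lingering Python references the test may have left behind,
    # then re-check; anything still allocated is a candidate leak.
    gc.collect()
    torch.cuda.synchronize(device)
    after = torch.cuda.memory_allocated(device)
    if after > before:
        raise RuntimeError(
            f"possible leak on device {device}: {before} -> {after} bytes"
        )
```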
2025-12-04T12:05:39.8025025Z ======================= 1 failed, 9 deselected in 11.74s ======================= 2025-12-04T12:05:39.8025062Z Got exit code 1 2025-12-04T12:05:39.8025254Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_True_cuda 2025-12-04T12:05:39.8025381Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set 2025-12-04T12:05:39.8025564Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-5b177065e5cfd95d.xml 2025-12-04T12:05:39.8025621Z ============================= test session starts ============================== 2025-12-04T12:05:39.8025733Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:05:39.8025806Z cachedir: .pytest_cache 2025-12-04T12:05:39.8025960Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:05:39.8026006Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:05:39.8026047Z configfile: pytest.ini 2025-12-04T12:05:39.8026209Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:05:39.8026284Z collecting ... collected 10 items / 10 deselected / 0 selected 2025-12-04T12:05:39.8026336Z stepcurrent: skipping 10 already run items. 2025-12-04T12:05:39.8026390Z Running 0 items in this shard 2025-12-04T12:05:39.8026392Z 2025-12-04T12:05:39.8026619Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_comm/distributed.fsdp.test_fsdp_comm-5b177065e5cfd95d.xml - 2025-12-04T12:05:39.8026689Z ============================ 10 deselected in 0.01s ============================ 2025-12-04T12:05:39.8028571Z The following tests failed consistently: ['test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy0_cuda', 'test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_False_sharding_strategy1_cuda', 'test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy0_cuda', 'test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_False_use_no_sync_True_sharding_strategy1_cuda', 'test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy0_cuda', 'test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_False_sharding_strategy1_cuda', 'test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy0_cuda', 'test/distributed/fsdp/test_fsdp_comm.py::TestCommunicationCUDA::test_communication_nested_model_True_use_no_sync_True_sharding_strategy1_cuda', 'test/distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_False_cuda', 'test/distributed/fsdp/test_fsdp_comm.py::TestExplicitUnshardCUDA::test_unshard_async_use_orig_params_True_cuda'] 2025-12-04T12:05:39.8028577Z 2025-12-04T12:05:39.8028759Z FINISHED PRINTING LOG FILE of distributed/fsdp/test_fsdp_comm 1/1 
(test/test-reports/distributed.fsdp.test_fsdp_comm_1.1_4659699ad34baeee_.log) 2025-12-04T12:05:39.8028763Z 2025-12-04T12:05:39.8028886Z Finished distributed/fsdp/test_fsdp_comm 1/1 ... [2025-12-04 12:05:39.662269][4974968.512198989], took 7.39min 2025-12-04T12:05:39.8029152Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T12:05:39.8029242Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T12:05:39.8029338Z GITHUB_RUN_ID, GITHUB_RUN_ATTEMPT, or ARTIFACTS_FILE_SUFFIX not set, not uploading 2025-12-04T12:05:39.8029385Z Uploading artifacts took 0.00 seconds 2025-12-04T12:05:39.8029465Z distributed/fsdp/test_fsdp_comm 1/1 failed! 2025-12-04T12:05:39.8029585Z Running distributed/fsdp/test_fsdp_clip_grad_norm 1/1 ... [2025-12-04 12:05:39.670375][4974968.520307621] 2025-12-04T12:05:39.8029632Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T12:05:39.8029952Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/fsdp/test_fsdp_clip_grad_norm.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 12:05:39.670869] 2025-12-04T12:08:44.4376449Z 2025-12-04T12:08:44.4377938Z PRINTING LOG FILE of distributed/fsdp/test_fsdp_clip_grad_norm 1/1 (test/test-reports/distributed.fsdp.test_fsdp_clip_grad_norm_1.1_2ac95aece383090e_.log) 2025-12-04T12:08:44.4379523Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-7870c95953395460.xml 2025-12-04T12:08:44.4380752Z ============================= test session starts ============================== 2025-12-04T12:08:44.4381516Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:08:44.4382166Z cachedir: .pytest_cache 2025-12-04T12:08:44.4382915Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:08:44.4384503Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:08:44.4384907Z configfile: pytest.ini 2025-12-04T12:08:44.4385864Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:08:44.4386671Z collecting ... 
collected 4 items 2025-12-04T12:08:44.4387165Z stepcurrent: Cannot find last run test, not skipping 2025-12-04T12:08:44.4389353Z Running 4 items in this shard: test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_ddp_parity_cuda, test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_low_precision_grads_cuda, test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_no_gradients_cuda, test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_non_root_cuda 2025-12-04T12:08:44.4392008Z 2025-12-04T12:08:44.4392933Z distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_ddp_parity_cuda I1204 12:05:41.489000 426815 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 426884 2025-12-04T12:08:44.4394486Z I1204 12:05:41.490000 426815 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 426885 2025-12-04T12:08:44.4395626Z I1204 12:05:41.491000 426815 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 426886 2025-12-04T12:08:44.4396747Z I1204 12:05:41.491000 426815 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 426887 2025-12-04T12:08:44.4398587Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:08:44.4400061Z self.encoder = TransformerEncoder( 2025-12-04T12:08:44.4401595Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:08:44.4403060Z self.encoder = TransformerEncoder( 2025-12-04T12:08:44.4404480Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:08:44.4406164Z self.encoder = TransformerEncoder( 2025-12-04T12:08:44.4407602Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:08:44.4409018Z self.encoder = TransformerEncoder( 2025-12-04T12:08:44.4411018Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.4412949Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.4414899Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. 
If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.4416866Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.4418777Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.4420786Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.4422681Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.4424578Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.4425839Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning. 2025-12-04T12:08:44.4427051Z return func(*args, **kwargs) 2025-12-04T12:08:44.4428216Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py:91: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.4429397Z return fsdp_fn(module, **kwargs) 2025-12-04T12:08:44.4430555Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py:91: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.4431775Z return fsdp_fn(module, **kwargs) 2025-12-04T12:08:44.4432932Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py:91: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.4434095Z return fsdp_fn(module, **kwargs) 2025-12-04T12:08:44.4435245Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py:91: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.4436394Z return fsdp_fn(module, **kwargs) 2025-12-04T12:08:44.4437679Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:395: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.4438875Z fsdp_model = FSDP( 2025-12-04T12:08:44.4440021Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:395: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 
2025-12-04T12:08:44.4441244Z fsdp_model = FSDP( 2025-12-04T12:08:44.4442379Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:395: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.4443557Z fsdp_model = FSDP( 2025-12-04T12:08:44.4444693Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:395: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.4445861Z fsdp_model = FSDP( 2025-12-04T12:08:44.4450283Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: The AccumulateGrad node's stream does not match the stream of the node that produced the incoming gradient. This may incur unnecessary synchronization and break CUDA graph capture if the AccumulateGrad node's stream is the default stream. This mismatch is caused by an AccumulateGrad node created prior to the current iteration being kept alive. This can happen if the autograd graph is still being kept alive by tensors such as the loss, or if you are using DDP, which will stash a reference to the node. To resolve the mismatch, delete all references to the autograd graph or ensure that DDP initialization is performed under the same stream as subsequent forwards. If the mismatch is intentional, you can use torch.autograd.graph.set_warn_on_accumulate_grad_stream_mismatch(False) to suppress this warning. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/input_buffer.cpp:240.) 2025-12-04T12:08:44.4455134Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:08:44.4459839Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: The AccumulateGrad node's stream does not match the stream of the node that produced the incoming gradient. This may incur unnecessary synchronization and break CUDA graph capture if the AccumulateGrad node's stream is the default stream. This mismatch is caused by an AccumulateGrad node created prior to the current iteration being kept alive. This can happen if the autograd graph is still being kept alive by tensors such as the loss, or if you are using DDP, which will stash a reference to the node. To resolve the mismatch, delete all references to the autograd graph or ensure that DDP initialization is performed under the same stream as subsequent forwards. If the mismatch is intentional, you can use torch.autograd.graph.set_warn_on_accumulate_grad_stream_mismatch(False) to suppress this warning. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/input_buffer.cpp:240.) 2025-12-04T12:08:44.4464537Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:08:44.4469302Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: The AccumulateGrad node's stream does not match the stream of the node that produced the incoming gradient. This may incur unnecessary synchronization and break CUDA graph capture if the AccumulateGrad node's stream is the default stream. This mismatch is caused by an AccumulateGrad node created prior to the current iteration being kept alive. 
This can happen if the autograd graph is still being kept alive by tensors such as the loss, or if you are using DDP, which will stash a reference to the node. To resolve the mismatch, delete all references to the autograd graph or ensure that DDP initialization is performed under the same stream as subsequent forwards. If the mismatch is intentional, you can use torch.autograd.graph.set_warn_on_accumulate_grad_stream_mismatch(False) to suppress this warning. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/input_buffer.cpp:240.) 2025-12-04T12:08:44.4474006Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:08:44.4478701Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: The AccumulateGrad node's stream does not match the stream of the node that produced the incoming gradient. This may incur unnecessary synchronization and break CUDA graph capture if the AccumulateGrad node's stream is the default stream. This mismatch is caused by an AccumulateGrad node created prior to the current iteration being kept alive. This can happen if the autograd graph is still being kept alive by tensors such as the loss, or if you are using DDP, which will stash a reference to the node. To resolve the mismatch, delete all references to the autograd graph or ensure that DDP initialization is performed under the same stream as subsequent forwards. If the mismatch is intentional, you can use torch.autograd.graph.set_warn_on_accumulate_grad_stream_mismatch(False) to suppress this warning. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/input_buffer.cpp:240.) 2025-12-04T12:08:44.4483460Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-12-04T12:08:44.4484883Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:123: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.4486065Z fsdp_model.transformer.encoder = FSDP( 2025-12-04T12:08:44.4487233Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:123: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.4488392Z fsdp_model.transformer.encoder = FSDP( 2025-12-04T12:08:44.4489548Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:123: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.4490735Z fsdp_model.transformer.encoder = FSDP( 2025-12-04T12:08:44.4491897Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:123: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 
2025-12-04T12:08:44.4493055Z fsdp_model.transformer.encoder = FSDP( 2025-12-04T12:08:44.4493814Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.4494956Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.4496594Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.4498196Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.4499787Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.4501325Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.4502896Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4504423Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.4505950Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4507483Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.4509015Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.4510497Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.4512049Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.4513617Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.4515729Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1997312 on device 2. CUDA driver allocated memory was 2300575744 and is now 3997171712. 
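[editor's note] The repeated UserWarning above ("FSDP got the argument `device_id` cuda ... which does not have an explicit index") is avoidable in exactly the two ways the message names: pin the device per rank before constructing FSDP, or pass an indexed device. A minimal sketch, to be called on each rank after the process group is initialized (`rank` and the stand-in `nn.Linear` module are assumptions):

```python
import torch
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def wrap_model(rank: int) -> FSDP:
    # Pin this process to its GPU before FSDP init, as the warning recommends.
    torch.cuda.set_device(rank)
    module = nn.Linear(16, 16)  # stand-in for the real model
    # Passing an indexed device avoids the bare-"cuda" ambiguity entirely.
    return FSDP(module, device_id=torch.device(f"cuda:{rank}"))
```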
2025-12-04T12:08:44.4517665Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.4518813Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.4520710Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda 2025-12-04T12:08:44.4522275Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.4523473Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.4524840Z [rank2]:E1204 12:06:00.179000 426886 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:08:44.4525632Z dist init r=2, world=4 2025-12-04T12:08:44.4526308Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.4527413Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.4529008Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.4530580Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.4532310Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.4533785Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.4535234Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4536757Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.4538283Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4539806Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.4541372Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.4542890Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.4544381Z 
[rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.4545947Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.4548010Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1963520 on device 3. CUDA driver allocated memory was 2250244096 and is now 3946840064. 2025-12-04T12:08:44.4549934Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.4551115Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.4552954Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda 2025-12-04T12:08:44.4554505Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.4555702Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.4557058Z [rank3]:E1204 12:06:00.226000 426887 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:08:44.4557844Z dist init r=3, world=4 2025-12-04T12:08:44.4558508Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.4559616Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.4561274Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.4562935Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.4564498Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.4565968Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.4567606Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4569338Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.4570943Z [rank1]:E1204 12:06:00.246000 426885 
site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4572467Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.4574028Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.4575558Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.4577046Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.4578582Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.4580719Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1929728 on device 1. CUDA driver allocated memory was 2317352960 and is now 4013948928. 2025-12-04T12:08:44.4582641Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.4583789Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.4585617Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda 2025-12-04T12:08:44.4587173Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.4588365Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.4589725Z [rank1]:E1204 12:06:00.246000 426885 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:08:44.4590519Z dist init r=1, world=4 2025-12-04T12:08:44.4591316Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.4592416Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.4594096Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.4595664Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.4597243Z [rank0]:E1204 12:06:00.247000 426884 
site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.4598716Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.4600160Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4601776Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.4603302Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4604860Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.4606446Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.4607936Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.4609444Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.4611070Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.4613127Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1929728 on device 0. CUDA driver allocated memory was 2459959296 and is now 4156555264. 
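[editor's note] The FutureWarning flood in this run ("The `NO_SHARD` sharding strategy is deprecated ... please use `DistributedDataParallel` instead") points at a direct replacement: NO_SHARD keeps full parameters on every rank and only synchronizes gradients, which is what DDP already does. A sketch of the suggested migration, assuming an initialized process group and a local `rank`:

```python
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_no_shard_equivalent(module: nn.Module, rank: int) -> DDP:
    # DDP replicates parameters and all-reduces gradients, matching the
    # deprecated FSDP NO_SHARD behavior per the warning's advice.
    return DDP(module.to(f"cuda:{rank}"), device_ids=[rank])
```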
2025-12-04T12:08:44.4615054Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.4616200Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.4618037Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda 2025-12-04T12:08:44.4619586Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.4620852Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.4622209Z [rank0]:E1204 12:06:00.247000 426884 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:08:44.4622996Z dist init r=0, world=4 2025-12-04T12:08:44.4624458Z [rank0]:[W1204 12:06:00.491148406 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:08:44.4625814Z FAILED [21.0517s] [ 25%] 2025-12-04T12:08:44.4626034Z 2025-12-04T12:08:44.4626238Z =================================== FAILURES =================================== 2025-12-04T12:08:44.4626843Z __________________ TestClipGradNormCUDA.test_ddp_parity_cuda ___________________ 2025-12-04T12:08:44.4627405Z Traceback (most recent call last): 2025-12-04T12:08:44.4628224Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:08:44.4629038Z self._join_processes(fn) 2025-12-04T12:08:44.4629847Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:08:44.4630806Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:08:44.4631697Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:08:44.4632550Z raise RuntimeError(error) 2025-12-04T12:08:44.4633042Z RuntimeError: Process 2 exited with error code 10 and exception: 2025-12-04T12:08:44.4633637Z Traceback (most recent call last): 2025-12-04T12:08:44.4634421Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.4635255Z getattr(self, test_name)() 2025-12-04T12:08:44.4636018Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.4636779Z fn() 2025-12-04T12:08:44.4637442Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4638208Z method(*args, **kwargs) 2025-12-04T12:08:44.4638942Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4639699Z method(*args, **kwargs) 2025-12-04T12:08:44.4640420Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.4641304Z with policy(): 2025-12-04T12:08:44.4641999Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.4642758Z raise RuntimeError(msg) 2025-12-04T12:08:44.4644007Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1997312 on device 2. CUDA driver allocated memory was 2300575744 and is now 3997171712. 2025-12-04T12:08:44.4645144Z 2025-12-04T12:08:44.4645392Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.4646414Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda 2025-12-04T12:08:44.4647190Z 2025-12-04T12:08:44.4647481Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.4647899Z 2025-12-04T12:08:44.4647904Z 2025-12-04T12:08:44.4648167Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:08:44.4648836Z Process 2 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:08:44.4650088Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-7870c95953395460.xml - 2025-12-04T12:08:44.4651320Z =========================== short test summary info ============================ 2025-12-04T12:08:44.4652446Z FAILED [21.0517s] distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_ddp_parity_cuda - RuntimeError: Process 2 exited with error code 10 and exception: 2025-12-04T12:08:44.4653416Z Traceback (most recent call last): 2025-12-04T12:08:44.4654213Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.4655014Z getattr(self, test_name)() 2025-12-04T12:08:44.4655771Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.4656538Z fn() 2025-12-04T12:08:44.4657193Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4657944Z method(*args, **kwargs) 2025-12-04T12:08:44.4658661Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4659408Z method(*args, **kwargs) 2025-12-04T12:08:44.4660132Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.4660922Z with policy(): 2025-12-04T12:08:44.4661612Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.4662408Z raise RuntimeError(msg) 2025-12-04T12:08:44.4663663Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1997312 on device 2. CUDA driver allocated memory was 2300575744 and is now 3997171712. 
2025-12-04T12:08:44.4664855Z 2025-12-04T12:08:44.4665100Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.4666115Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda 2025-12-04T12:08:44.4666886Z 2025-12-04T12:08:44.4667180Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.4667795Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 2025-12-04T12:08:44.4668310Z ============================== 1 failed in 21.06s ============================== 2025-12-04T12:08:44.4668739Z Got exit code 1 2025-12-04T12:08:44.4669051Z Retrying single test... 2025-12-04T12:08:44.4669961Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-cd646c22ce7d8455.xml 2025-12-04T12:08:44.4671026Z ============================= test session starts ============================== 2025-12-04T12:08:44.4671721Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:08:44.4672337Z cachedir: .pytest_cache 2025-12-04T12:08:44.4673067Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:08:44.4673843Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:08:44.4674227Z configfile: pytest.ini 2025-12-04T12:08:44.4674966Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:08:44.4675850Z collecting ... collected 4 items / 3 deselected / 1 selected 2025-12-04T12:08:44.4676827Z stepcurrent: skipping 0 already run items. 
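[editor's note] The `hypothesis profile 'pytorch_ci'` line in each session header corresponds to a registered Hypothesis settings profile. Reproducing the printed knobs locally is straightforward; everything below except the surrounding boilerplate is taken directly from the values the log prints:

```python
from hypothesis import HealthCheck, settings

settings.register_profile(
    "pytorch_ci",
    database=None,                # no example database, as printed in the log
    max_examples=50,
    derandomize=True,             # deterministic example generation in CI
    suppress_health_check=[HealthCheck.too_slow],
)
settings.load_profile("pytorch_ci")
```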
Running only test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_ddp_parity_cuda 2025-12-04T12:08:44.4677713Z Running 1 items in this shard 2025-12-04T12:08:44.4677948Z 2025-12-04T12:08:44.4678869Z distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_ddp_parity_cuda I1204 12:06:05.296000 427725 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 427794 2025-12-04T12:08:44.4680488Z I1204 12:06:05.297000 427725 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 427795 2025-12-04T12:08:44.4681660Z I1204 12:06:05.298000 427725 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 427796 2025-12-04T12:08:44.4682777Z I1204 12:06:05.298000 427725 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 427797 2025-12-04T12:08:44.4684586Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:08:44.4686035Z self.encoder = TransformerEncoder( 2025-12-04T12:08:44.4687465Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:08:44.4688889Z self.encoder = TransformerEncoder( 2025-12-04T12:08:44.4690299Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:08:44.4691813Z self.encoder = TransformerEncoder( 2025-12-04T12:08:44.4693214Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-12-04T12:08:44.4694670Z self.encoder = TransformerEncoder( 2025-12-04T12:08:44.4696557Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.4698504Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.4700432Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
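[editor's note] The `enable_nested_tensor is True, but self.use_nested_tensor is False` UserWarning fires because the encoder layer was built without `batch_first=True`; constructing the layer as the message suggests keeps the nested-tensor inference fast path available. A minimal sketch with arbitrary sizes:

```python
import torch.nn as nn

# batch_first=True on the layer lets TransformerEncoder actually use the
# nested-tensor fast path instead of warning and falling back.
layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2, enable_nested_tensor=True)
```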
2025-12-04T12:08:44.4696557Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.4698504Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.4700432Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.4702393Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.4704299Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.4706201Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.4708093Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.4709983Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.4711374Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning.
2025-12-04T12:08:44.4712586Z return func(*args, **kwargs)
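Both warnings above point at the same fix: bind each rank to its GPU before process-group and FSDP setup, and pass an indexed device instead of the bare "cuda" string. A minimal sketch under those assumptions (rank and world_size come from the multiprocess harness; env:// rendezvous is assumed):

    import torch
    import torch.distributed as dist

    def init_for_rank(rank: int, world_size: int) -> None:
        # Makes the bare "cuda" device resolve to this rank's GPU, which
        # addresses the FSDP `device_id` warning above.
        torch.cuda.set_device(rank)
        # Passing device_id here also silences the barrier() warning above.
        # Assumes MASTER_ADDR/MASTER_PORT are exported by the harness.
        dist.init_process_group(
            "nccl",
            rank=rank,
            world_size=world_size,
            device_id=torch.device("cuda", rank),
        )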
2025-12-04T12:08:44.4713753Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py:91: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4714935Z return fsdp_fn(module, **kwargs)
2025-12-04T12:08:44.4716087Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py:91: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4717262Z return fsdp_fn(module, **kwargs)
2025-12-04T12:08:44.4718414Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py:91: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4719572Z return fsdp_fn(module, **kwargs)
2025-12-04T12:08:44.4720760Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py:91: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4721974Z return fsdp_fn(module, **kwargs)
2025-12-04T12:08:44.4723157Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:395: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4724383Z fsdp_model = FSDP(
2025-12-04T12:08:44.4725529Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:395: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4726708Z fsdp_model = FSDP(
2025-12-04T12:08:44.4727848Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:395: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4729027Z fsdp_model = FSDP(
2025-12-04T12:08:44.4730151Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:395: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4731393Z fsdp_model = FSDP(
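The repeated FutureWarning is FSDP's deprecation notice for ShardingStrategy.NO_SHARD, which replicates parameters the way DDP does. A minimal sketch of the suggested migration (the Linear module is a stand-in for the test's transformer):

    import torch
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    # Placeholder module; NO_SHARD FSDP and DDP both keep a full replica
    # on every rank and all-reduce gradients.
    model = nn.Linear(8, 8).cuda()
    ddp_model = DDP(model, device_ids=[torch.cuda.current_device()])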
2025-12-04T12:08:44.4735774Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: The AccumulateGrad node's stream does not match the stream of the node that produced the incoming gradient. This may incur unnecessary synchronization and break CUDA graph capture if the AccumulateGrad node's stream is the default stream. This mismatch is caused by an AccumulateGrad node created prior to the current iteration being kept alive. This can happen if the autograd graph is still being kept alive by tensors such as the loss, or if you are using DDP, which will stash a reference to the node. To resolve the mismatch, delete all references to the autograd graph or ensure that DDP initialization is performed under the same stream as subsequent forwards. If the mismatch is intentional, you can use torch.autograd.graph.set_warn_on_accumulate_grad_stream_mismatch(False) to suppress this warning. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/input_buffer.cpp:240.)
2025-12-04T12:08:44.4738992Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
2025-12-04T12:08:44.4742046Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: The AccumulateGrad node's stream does not match the stream of the node that produced the incoming gradient. This may incur unnecessary synchronization and break CUDA graph capture if the AccumulateGrad node's stream is the default stream. This mismatch is caused by an AccumulateGrad node created prior to the current iteration being kept alive. This can happen if the autograd graph is still being kept alive by tensors such as the loss, or if you are using DDP, which will stash a reference to the node. To resolve the mismatch, delete all references to the autograd graph or ensure that DDP initialization is performed under the same stream as subsequent forwards. If the mismatch is intentional, you can use torch.autograd.graph.set_warn_on_accumulate_grad_stream_mismatch(False) to suppress this warning. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/input_buffer.cpp:240.)
2025-12-04T12:08:44.4744967Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
2025-12-04T12:08:44.4747902Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: The AccumulateGrad node's stream does not match the stream of the node that produced the incoming gradient. This may incur unnecessary synchronization and break CUDA graph capture if the AccumulateGrad node's stream is the default stream. This mismatch is caused by an AccumulateGrad node created prior to the current iteration being kept alive. This can happen if the autograd graph is still being kept alive by tensors such as the loss, or if you are using DDP, which will stash a reference to the node. To resolve the mismatch, delete all references to the autograd graph or ensure that DDP initialization is performed under the same stream as subsequent forwards. If the mismatch is intentional, you can use torch.autograd.graph.set_warn_on_accumulate_grad_stream_mismatch(False) to suppress this warning. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/input_buffer.cpp:240.)
2025-12-04T12:08:44.4750897Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
2025-12-04T12:08:44.4753801Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: The AccumulateGrad node's stream does not match the stream of the node that produced the incoming gradient. This may incur unnecessary synchronization and break CUDA graph capture if the AccumulateGrad node's stream is the default stream. This mismatch is caused by an AccumulateGrad node created prior to the current iteration being kept alive. This can happen if the autograd graph is still being kept alive by tensors such as the loss, or if you are using DDP, which will stash a reference to the node. To resolve the mismatch, delete all references to the autograd graph or ensure that DDP initialization is performed under the same stream as subsequent forwards. If the mismatch is intentional, you can use torch.autograd.graph.set_warn_on_accumulate_grad_stream_mismatch(False) to suppress this warning. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/input_buffer.cpp:240.)
2025-12-04T12:08:44.4756697Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
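The AccumulateGrad warning names its own opt-out. If the stream mismatch is known to be benign (no CUDA graph capture in play), it can be silenced globally with the exact toggle quoted above:

    import torch

    # Disables the AccumulateGrad stream-mismatch warning shown above.
    # Only appropriate when the mismatch is intentional.
    torch.autograd.graph.set_warn_on_accumulate_grad_stream_mismatch(False)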
2025-12-04T12:08:44.4757588Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:123: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4758327Z fsdp_model.transformer.encoder = FSDP(
2025-12-04T12:08:44.4759063Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:123: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4759792Z fsdp_model.transformer.encoder = FSDP(
2025-12-04T12:08:44.4760517Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:123: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4761284Z fsdp_model.transformer.encoder = FSDP(
2025-12-04T12:08:44.4762058Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:123: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4762775Z fsdp_model.transformer.encoder = FSDP(
2025-12-04T12:08:44.4763247Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.4763950Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.4764958Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.4765949Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.4767195Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.4768272Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.4769396Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4770380Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4771385Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4772342Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4773303Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.4774242Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.4775181Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.4776138Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.4777450Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1929728 on device 2. CUDA driver allocated memory was 2300575744 and is now 3997171712.
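For scale, the figures quoted in that error amount to roughly 1.8 MiB of growth in the caching allocator and about 1.6 GiB of driver-side growth on device 2:

    # Numbers copied from the rank 2 error above.
    alloc_before, alloc_after = 512, 1_929_728
    driver_before, driver_after = 2_300_575_744, 3_997_171_712

    print((alloc_after - alloc_before) / 2**20)    # ~1.84 MiB still held by the allocator
    print((driver_after - driver_before) / 2**30)  # ~1.58 GiB more driver-allocated memory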
2025-12-04T12:08:44.4778667Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4779388Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.4780547Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda
2025-12-04T12:08:44.4781626Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4782376Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.4783229Z [rank2]:E1204 12:06:23.755000 427796 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:08:44.4783725Z dist init r=2, world=4
2025-12-04T12:08:44.4784145Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.4784837Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.4785849Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.4786839Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.4787822Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.4788772Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.4789711Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4790715Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4791673Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4792633Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4793590Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.4794528Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.4795480Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.4796459Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.4797752Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1997312 on device 1. CUDA driver allocated memory was 2317352960 and is now 4013948928.
2025-12-04T12:08:44.4798964Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4799685Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.4800943Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda
2025-12-04T12:08:44.4801917Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4802668Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.4803522Z [rank1]:E1204 12:06:23.766000 427795 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:08:44.4804020Z dist init r=1, world=4
2025-12-04T12:08:44.4804439Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.4805140Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.4806141Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.4807161Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.4808177Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.4809100Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.4810014Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4811019Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4811979Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4812938Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4813892Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.4814829Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.4815769Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.4816731Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.4818025Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1997312 on device 0. CUDA driver allocated memory was 2459959296 and is now 4156555264.
2025-12-04T12:08:44.4819231Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4820006Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.4821212Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda
2025-12-04T12:08:44.4822192Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4822946Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.4823803Z [rank0]:E1204 12:06:23.769000 427794 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:08:44.4824305Z dist init r=0, world=4
2025-12-04T12:08:44.4824725Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.4825423Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.4826469Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.4827487Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.4828477Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.4829472Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.4830388Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4831382Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4832335Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4833284Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4834237Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.4835162Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.4836098Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.4837053Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.4838395Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1997312 on device 3. CUDA driver allocated memory was 2250244096 and is now 3946840064.
2025-12-04T12:08:44.4839594Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4840308Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.4841500Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda
2025-12-04T12:08:44.4842477Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4861379Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.4862251Z [rank3]:E1204 12:06:23.781000 427797 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:08:44.4862765Z dist init r=3, world=4
2025-12-04T12:08:44.4863697Z [rank0]:[W1204 12:06:24.945531479 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2025-12-04T12:08:44.4864586Z FAILED [20.6605s] [100%]
2025-12-04T12:08:44.4864727Z
2025-12-04T12:08:44.4864859Z =================================== FAILURES ===================================
2025-12-04T12:08:44.4865242Z __________________ TestClipGradNormCUDA.test_ddp_parity_cuda ___________________
2025-12-04T12:08:44.4865595Z Traceback (most recent call last):
2025-12-04T12:08:44.4866119Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:08:44.4866629Z self._join_processes(fn)
2025-12-04T12:08:44.4867138Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:08:44.4867688Z self._check_return_codes(fn, elapsed_time)
2025-12-04T12:08:44.4868241Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:08:44.4868781Z raise RuntimeError(error)
2025-12-04T12:08:44.4869099Z RuntimeError: Process 1 exited with error code 10 and exception:
2025-12-04T12:08:44.4869439Z Traceback (most recent call last):
2025-12-04T12:08:44.4869940Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.4870443Z getattr(self, test_name)()
2025-12-04T12:08:44.4870982Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.4871471Z fn()
2025-12-04T12:08:44.4871892Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4872383Z method(*args, **kwargs)
2025-12-04T12:08:44.4872848Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4873331Z method(*args, **kwargs)
2025-12-04T12:08:44.4873789Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.4874258Z with policy():
2025-12-04T12:08:44.4874702Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.4875183Z raise RuntimeError(msg)
2025-12-04T12:08:44.4876042Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1997312 on device 1. CUDA driver allocated memory was 2317352960 and is now 4013948928.
2025-12-04T12:08:44.4876762Z
2025-12-04T12:08:44.4876924Z To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.4877574Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda
2025-12-04T12:08:44.4878060Z
2025-12-04T12:08:44.4878249Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.4878511Z
2025-12-04T12:08:44.4878634Z Process 2 exited with error code 10 and exception:
2025-12-04T12:08:44.4878931Z Traceback (most recent call last):
2025-12-04T12:08:44.4879439Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.4879943Z getattr(self, test_name)()
2025-12-04T12:08:44.4880426Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.4880986Z fn()
2025-12-04T12:08:44.4881403Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4881882Z method(*args, **kwargs)
2025-12-04T12:08:44.4882389Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4882868Z method(*args, **kwargs)
2025-12-04T12:08:44.4883322Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.4883795Z with policy():
2025-12-04T12:08:44.4884243Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.4884726Z raise RuntimeError(msg)
2025-12-04T12:08:44.4885521Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1929728 on device 2. CUDA driver allocated memory was 2300575744 and is now 3997171712.
2025-12-04T12:08:44.4886241Z
2025-12-04T12:08:44.4886395Z To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.4887035Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda
2025-12-04T12:08:44.4887521Z
2025-12-04T12:08:44.4887703Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.4887964Z
2025-12-04T12:08:44.4887968Z
2025-12-04T12:08:44.4888135Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:08:44.4888561Z Process 1 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:08:44.4889357Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-cd646c22ce7d8455.xml -
2025-12-04T12:08:44.4890099Z =========================== short test summary info ============================
2025-12-04T12:08:44.4890811Z FAILED [20.6605s] distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_ddp_parity_cuda - RuntimeError: Process 1 exited with error code 10 and exception:
2025-12-04T12:08:44.4891430Z Traceback (most recent call last):
2025-12-04T12:08:44.4891938Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.4892440Z getattr(self, test_name)()
2025-12-04T12:08:44.4892987Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.4893470Z fn()
2025-12-04T12:08:44.4893888Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4894365Z method(*args, **kwargs)
2025-12-04T12:08:44.4894823Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4895298Z method(*args, **kwargs)
2025-12-04T12:08:44.4895754Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.4896223Z with policy():
2025-12-04T12:08:44.4896661Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.4897139Z raise RuntimeError(msg)
2025-12-04T12:08:44.4897934Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1997312 on device 1. CUDA driver allocated memory was 2317352960 and is now 4013948928.
2025-12-04T12:08:44.4898649Z
2025-12-04T12:08:44.4898809Z To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.4899476Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda
2025-12-04T12:08:44.4899958Z
2025-12-04T12:08:44.4900174Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.4900429Z
2025-12-04T12:08:44.4900556Z Process 2 exited with error code 10 and exception:
2025-12-04T12:08:44.4900912Z Traceback (most recent call last):
2025-12-04T12:08:44.4901407Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.4901907Z getattr(self, test_name)()
2025-12-04T12:08:44.4902392Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.4902873Z fn()
2025-12-04T12:08:44.4903289Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4903764Z method(*args, **kwargs)
2025-12-04T12:08:44.4904219Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4904699Z method(*args, **kwargs)
2025-12-04T12:08:44.4905148Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.4905613Z with policy():
2025-12-04T12:08:44.4906050Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.4906528Z raise RuntimeError(msg)
2025-12-04T12:08:44.4907323Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1929728 on device 2. CUDA driver allocated memory was 2300575744 and is now 3997171712.
2025-12-04T12:08:44.4908036Z
2025-12-04T12:08:44.4908195Z To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.4908828Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda
2025-12-04T12:08:44.4909311Z
2025-12-04T12:08:44.4909498Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.4909894Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
2025-12-04T12:08:44.4910245Z ======================= 1 failed, 3 deselected in 20.68s =======================
2025-12-04T12:08:44.4910532Z Got exit code 1
2025-12-04T12:08:44.4910846Z Retrying single test...
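The check driving these failures is a before/after comparison: the harness snapshots CUDA memory counters around the test body and raises if they do not settle back to baseline. A simplified sketch of that idea only, not the actual CudaMemoryLeakCheck implementation in common_utils.py:

    import torch

    def check_for_leak(fn, device: int = 0) -> None:
        torch.cuda.synchronize(device)
        before = torch.cuda.memory_allocated(device)  # caching-allocator bytes
        fn()
        torch.cuda.synchronize(device)
        torch.cuda.empty_cache()
        after = torch.cuda.memory_allocated(device)
        if after > before:
            raise RuntimeError(
                f"possible leak on device {device}: {before} -> {after} bytes"
            )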
2025-12-04T12:08:44.4911414Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-8bb26962be86898a.xml
2025-12-04T12:08:44.4912041Z ============================= test session starts ==============================
2025-12-04T12:08:44.4912481Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:08:44.4912874Z cachedir: .pytest_cache
2025-12-04T12:08:44.4913338Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:08:44.4913833Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:08:44.4914080Z configfile: pytest.ini
2025-12-04T12:08:44.4914550Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:08:44.4915112Z collecting ... collected 4 items / 3 deselected / 1 selected
2025-12-04T12:08:44.4915727Z stepcurrent: skipping 0 already run items. Running only test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_ddp_parity_cuda
2025-12-04T12:08:44.4916286Z Running 1 items in this shard
2025-12-04T12:08:44.4916464Z
2025-12-04T12:08:44.4917054Z distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_ddp_parity_cuda I1204 12:06:28.592000 428635 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 428704
2025-12-04T12:08:44.4918046Z I1204 12:06:28.592000 428635 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 428705
2025-12-04T12:08:44.4918756Z I1204 12:06:28.593000 428635 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 428706
2025-12-04T12:08:44.4919461Z I1204 12:06:28.594000 428635 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 428707
2025-12-04T12:08:44.4920654Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:08:44.4921578Z self.encoder = TransformerEncoder(
2025-12-04T12:08:44.4922485Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:08:44.4923386Z self.encoder = TransformerEncoder(
2025-12-04T12:08:44.4924287Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:08:44.4925187Z self.encoder = TransformerEncoder(
2025-12-04T12:08:44.4926077Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/transformer.py:144: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
2025-12-04T12:08:44.4926932Z self.encoder = TransformerEncoder(
2025-12-04T12:08:44.4927500Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.4928092Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.4928709Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.4929298Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.4929883Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.4930466Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.4931083Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.4931681Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.4932078Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning.
2025-12-04T12:08:44.4932465Z return func(*args, **kwargs)
2025-12-04T12:08:44.4932821Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py:91: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4933185Z return fsdp_fn(module, **kwargs)
2025-12-04T12:08:44.4933542Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py:91: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4933904Z return fsdp_fn(module, **kwargs)
2025-12-04T12:08:44.4934258Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py:91: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4934616Z return fsdp_fn(module, **kwargs)
2025-12-04T12:08:44.4934967Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py:91: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4935327Z return fsdp_fn(module, **kwargs)
2025-12-04T12:08:44.4935694Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:395: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4936060Z fsdp_model = FSDP(
2025-12-04T12:08:44.4936419Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:395: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4936782Z fsdp_model = FSDP(
2025-12-04T12:08:44.4937128Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:395: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4937486Z fsdp_model = FSDP(
2025-12-04T12:08:44.4937856Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:395: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4938216Z fsdp_model = FSDP(
2025-12-04T12:08:44.4939550Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: The AccumulateGrad node's stream does not match the stream of the node that produced the incoming gradient. This may incur unnecessary synchronization and break CUDA graph capture if the AccumulateGrad node's stream is the default stream. This mismatch is caused by an AccumulateGrad node created prior to the current iteration being kept alive. This can happen if the autograd graph is still being kept alive by tensors such as the loss, or if you are using DDP, which will stash a reference to the node. To resolve the mismatch, delete all references to the autograd graph or ensure that DDP initialization is performed under the same stream as subsequent forwards. If the mismatch is intentional, you can use torch.autograd.graph.set_warn_on_accumulate_grad_stream_mismatch(False) to suppress this warning. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/input_buffer.cpp:240.)
2025-12-04T12:08:44.4940993Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
2025-12-04T12:08:44.4942427Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: The AccumulateGrad node's stream does not match the stream of the node that produced the incoming gradient. This may incur unnecessary synchronization and break CUDA graph capture if the AccumulateGrad node's stream is the default stream. This mismatch is caused by an AccumulateGrad node created prior to the current iteration being kept alive. This can happen if the autograd graph is still being kept alive by tensors such as the loss, or if you are using DDP, which will stash a reference to the node. To resolve the mismatch, delete all references to the autograd graph or ensure that DDP initialization is performed under the same stream as subsequent forwards. If the mismatch is intentional, you can use torch.autograd.graph.set_warn_on_accumulate_grad_stream_mismatch(False) to suppress this warning. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/input_buffer.cpp:240.)
2025-12-04T12:08:44.4943850Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
2025-12-04T12:08:44.4945252Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: The AccumulateGrad node's stream does not match the stream of the node that produced the incoming gradient. This may incur unnecessary synchronization and break CUDA graph capture if the AccumulateGrad node's stream is the default stream. This mismatch is caused by an AccumulateGrad node created prior to the current iteration being kept alive. This can happen if the autograd graph is still being kept alive by tensors such as the loss, or if you are using DDP, which will stash a reference to the node. To resolve the mismatch, delete all references to the autograd graph or ensure that DDP initialization is performed under the same stream as subsequent forwards. If the mismatch is intentional, you can use torch.autograd.graph.set_warn_on_accumulate_grad_stream_mismatch(False) to suppress this warning. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/input_buffer.cpp:240.)
2025-12-04T12:08:44.4946647Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
2025-12-04T12:08:44.4948078Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py:865: UserWarning: The AccumulateGrad node's stream does not match the stream of the node that produced the incoming gradient. This may incur unnecessary synchronization and break CUDA graph capture if the AccumulateGrad node's stream is the default stream. This mismatch is caused by an AccumulateGrad node created prior to the current iteration being kept alive. This can happen if the autograd graph is still being kept alive by tensors such as the loss, or if you are using DDP, which will stash a reference to the node. To resolve the mismatch, delete all references to the autograd graph or ensure that DDP initialization is performed under the same stream as subsequent forwards. If the mismatch is intentional, you can use torch.autograd.graph.set_warn_on_accumulate_grad_stream_mismatch(False) to suppress this warning. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/input_buffer.cpp:240.)
2025-12-04T12:08:44.4949479Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
2025-12-04T12:08:44.4949913Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:123: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4950274Z fsdp_model.transformer.encoder = FSDP(
2025-12-04T12:08:44.4950666Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:123: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4951024Z fsdp_model.transformer.encoder = FSDP(
2025-12-04T12:08:44.4951393Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:123: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4951764Z fsdp_model.transformer.encoder = FSDP(
2025-12-04T12:08:44.4952116Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:123: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead.
2025-12-04T12:08:44.4952469Z fsdp_model.transformer.encoder = FSDP(
2025-12-04T12:08:44.4952702Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.4953050Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.4953539Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.4954024Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.4954505Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.4954956Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.4955402Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4955869Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4956328Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4956785Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4957280Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.4957727Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.4958176Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.4958634Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.4959258Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1997312 on device 3. CUDA driver allocated memory was 2250244096 and is now 3946840064.
2025-12-04T12:08:44.4959837Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4960182Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.4960782Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda
2025-12-04T12:08:44.4961274Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4961635Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.4962046Z [rank3]:E1204 12:06:46.883000 428707 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:08:44.4962284Z dist init r=3, world=4
2025-12-04T12:08:44.4962487Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.4962821Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.4963299Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.4963769Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.4964239Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.4964679Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.4965114Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4965571Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4966028Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4966513Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4966969Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.4967412Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.4967862Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.4968318Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.4968941Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1929728 on device 2. CUDA driver allocated memory was 2300575744 and is now 3997171712.
2025-12-04T12:08:44.4969516Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4969872Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.4970434Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda
2025-12-04T12:08:44.4970938Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4971300Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.4971706Z [rank2]:E1204 12:06:46.950000 428706 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:08:44.4971943Z dist init r=2, world=4
2025-12-04T12:08:44.4972142Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.4972472Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.4972949Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.4973420Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.4973890Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.4974333Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.4974767Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4975222Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4975709Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.4976164Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.4976622Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.4977070Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.4977517Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.4977976Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.4978594Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1929728 on device 1. CUDA driver allocated memory was 2317352960 and is now 4013948928.
2025-12-04T12:08:44.4979196Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4979538Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.4980086Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda
2025-12-04T12:08:44.4980551Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935]
2025-12-04T12:08:44.4980948Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.4981355Z [rank1]:E1204 12:06:46.972000 428705 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:08:44.4981592Z dist init r=1, world=4
2025-12-04T12:08:44.4981791Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.4982125Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.4982606Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.4983077Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.4983547Z [rank0]:E1204 12:06:46.973000 428704
site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.4983988Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.4984441Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4984922Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.4985379Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4985834Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.4986291Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.4986735Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.4987184Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.4987640Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.4988269Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1997312 on device 0. CUDA driver allocated memory was 2459959296 and is now 4156555264. 
2025-12-04T12:08:44.4988862Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.4989208Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.4989755Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda 2025-12-04T12:08:44.4990221Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.4990580Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.4991020Z [rank0]:E1204 12:06:46.973000 428704 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:08:44.4991256Z dist init r=0, world=4 2025-12-04T12:08:44.4991650Z [rank0]:[W1204 12:06:47.298839160 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:08:44.4992054Z FAILED [20.4716s] [100%] 2025-12-04T12:08:44.4992120Z 2025-12-04T12:08:44.4992176Z =================================== FAILURES =================================== 2025-12-04T12:08:44.4992356Z __________________ TestClipGradNormCUDA.test_ddp_parity_cuda ___________________ 2025-12-04T12:08:44.4992522Z Traceback (most recent call last): 2025-12-04T12:08:44.4992761Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:08:44.4992999Z self._join_processes(fn) 2025-12-04T12:08:44.4993238Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:08:44.4993496Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:08:44.4993790Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:08:44.4994044Z raise RuntimeError(error) 2025-12-04T12:08:44.4994190Z RuntimeError: Process 3 exited with error code 10 and exception: 2025-12-04T12:08:44.4994348Z Traceback (most recent call last): 2025-12-04T12:08:44.4994584Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.4994820Z getattr(self, test_name)() 2025-12-04T12:08:44.4995044Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.4995269Z fn() 2025-12-04T12:08:44.4995467Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4995693Z method(*args, **kwargs) 2025-12-04T12:08:44.4995915Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.4996141Z method(*args, **kwargs) 2025-12-04T12:08:44.4996354Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.4996593Z with policy(): 2025-12-04T12:08:44.4996799Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.4997043Z raise RuntimeError(msg) 2025-12-04T12:08:44.4997421Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1997312 on device 3. CUDA driver allocated memory was 2250244096 and is now 3946840064. 2025-12-04T12:08:44.4997764Z 2025-12-04T12:08:44.4997842Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.4998144Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda 2025-12-04T12:08:44.4998378Z 2025-12-04T12:08:44.4998464Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.4998589Z 2025-12-04T12:08:44.4998591Z 2025-12-04T12:08:44.4998668Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:08:44.4998867Z Process 3 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:08:44.4999241Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-8bb26962be86898a.xml - 2025-12-04T12:08:44.4999588Z =========================== short test summary info ============================ 2025-12-04T12:08:44.4999906Z FAILED [20.4716s] distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_ddp_parity_cuda - RuntimeError: Process 3 exited with error code 10 and exception: 2025-12-04T12:08:44.5000197Z Traceback (most recent call last): 2025-12-04T12:08:44.5000436Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5000709Z getattr(self, test_name)() 2025-12-04T12:08:44.5000937Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5001166Z fn() 2025-12-04T12:08:44.5001360Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5001587Z method(*args, **kwargs) 2025-12-04T12:08:44.5001802Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5002026Z method(*args, **kwargs) 2025-12-04T12:08:44.5002269Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5002491Z with policy(): 2025-12-04T12:08:44.5002697Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5002925Z raise RuntimeError(msg) 2025-12-04T12:08:44.5003303Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_ddp_parity_cuda! Caching allocator allocated memory was 512 and is now reported as 1997312 on device 3. CUDA driver allocated memory was 2250244096 and is now 3946840064. 
2025-12-04T12:08:44.5003645Z 2025-12-04T12:08:44.5003719Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5004022Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_ddp_parity_cuda 2025-12-04T12:08:44.5004259Z 2025-12-04T12:08:44.5004346Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5004532Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 2025-12-04T12:08:44.5004694Z ======================= 1 failed, 3 deselected in 20.48s ======================= 2025-12-04T12:08:44.5004844Z Got exit code 1 2025-12-04T12:08:44.5005044Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_ddp_parity_cuda 2025-12-04T12:08:44.5005365Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set 2025-12-04T12:08:44.5005739Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-47b9eab10e2da3f4.xml 2025-12-04T12:08:44.5006043Z ============================= test session starts ============================== 2025-12-04T12:08:44.5006253Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:08:44.5006438Z cachedir: .pytest_cache 2025-12-04T12:08:44.5006657Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:08:44.5006892Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:08:44.5007007Z configfile: pytest.ini 2025-12-04T12:08:44.5007226Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:08:44.5007489Z collecting ... collected 4 items / 1 deselected / 3 selected 2025-12-04T12:08:44.5007646Z stepcurrent: skipping 1 already run items. 2025-12-04T12:08:44.5007771Z Running 3 items in this shard 2025-12-04T12:08:44.5007839Z 2025-12-04T12:08:44.5008134Z distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_low_precision_grads_cuda I1204 12:06:52.074000 429545 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 429614 2025-12-04T12:08:44.5008606Z I1204 12:06:52.075000 429545 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 429615 2025-12-04T12:08:44.5008942Z I1204 12:06:52.075000 429545 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 429616 2025-12-04T12:08:44.5009278Z I1204 12:06:52.076000 429545 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 429617 2025-12-04T12:08:44.5009953Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
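
[editor's note] On the UserWarning just above: FSDP received `device_id` as the bare device "cuda" with no index, so it falls back to the current device for that rank. The warning itself names the two fixes: pin the current device per rank before FSDP initialization, or pass an explicitly indexed device. A minimal sketch of both, under the assumption that `rank` is this process's local rank and `model` its module:

import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Assumptions: `rank` is this process's local rank, `model` its nn.Module.
torch.cuda.set_device(rank)  # option 1: make device `rank` current before FSDP init
fsdp_model = FSDP(model, device_id=torch.device("cuda", rank))  # option 2: explicit index
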
2025-12-04T12:08:44.5010528Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5011181Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5011762Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5012337Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5012905Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5013470Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5014057Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5014453Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning. 2025-12-04T12:08:44.5014814Z return func(*args, **kwargs) 2025-12-04T12:08:44.5015172Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:426: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5015542Z return FSDP(layer, group, **fsdp_kwargs) 2025-12-04T12:08:44.5015908Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:426: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5016273Z return FSDP(layer, group, **fsdp_kwargs) 2025-12-04T12:08:44.5016634Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:426: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5016996Z return FSDP(layer, group, **fsdp_kwargs) 2025-12-04T12:08:44.5017357Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:426: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5017732Z return FSDP(layer, group, **fsdp_kwargs) 2025-12-04T12:08:44.5018083Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:275: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 
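
[editor's note] On the FutureWarning repeated above: `ShardingStrategy.NO_SHARD` is deprecated, and the message points to `DistributedDataParallel` as the replacement, which makes sense since NO_SHARD replicates parameters across ranks rather than sharding them. A minimal sketch of that substitution, assuming the process group is already initialized and `rank` names this process's GPU:

import torch
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumptions: init_process_group() has already run; `rank` is the local rank.
model = model.to(torch.device("cuda", rank))
ddp_model = DDP(model, device_ids=[rank])  # replicates like NO_SHARD, without FSDP wrapping
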
2025-12-04T12:08:44.5018430Z fsdp_model = FSDP( 2025-12-04T12:08:44.5018758Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:275: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5019104Z fsdp_model = FSDP( 2025-12-04T12:08:44.5019433Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:275: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5019772Z fsdp_model = FSDP( 2025-12-04T12:08:44.5020121Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:275: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5020463Z fsdp_model = FSDP( 2025-12-04T12:08:44.5020710Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5021056Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5021545Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5022025Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5022502Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5022953Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5023407Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5023892Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5024357Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5024823Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5025286Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5025735Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5026194Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5026659Z [rank2]:E1204 
12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5027297Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 2. CUDA driver allocated memory was 2300575744 and is now 3466592256. 2025-12-04T12:08:44.5027893Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5028244Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5028815Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5029330Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5029697Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5030112Z [rank2]:E1204 12:07:00.156000 429616 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:08:44.5030354Z dist init r=2, world=4 2025-12-04T12:08:44.5030562Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5030931Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5031416Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5031892Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5032366Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5032831Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5033284Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5033746Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5034210Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5034674Z [rank0]:E1204 12:07:00.183000 429614 
site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5035143Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5035597Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5036049Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5036513Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5037147Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 0. CUDA driver allocated memory was 2459959296 and is now 3625975808. 2025-12-04T12:08:44.5037740Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5038089Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5038686Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5039173Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5039539Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5039953Z [rank0]:E1204 12:07:00.183000 429614 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:08:44.5040292Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5040679Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5041164Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5041653Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5042128Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5042595Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 
2025-12-04T12:08:44.5043036Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5043496Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5043963Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5044425Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5044891Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5045345Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5045798Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5046266Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5046898Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 3. CUDA driver allocated memory was 2250244096 and is now 3416260608. 
2025-12-04T12:08:44.5047488Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5047867Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5048433Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5048920Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5049286Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5049695Z [rank3]:E1204 12:07:00.183000 429617 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:08:44.5049936Z dist init r=0, world=4 2025-12-04T12:08:44.5050041Z dist init r=3, world=4 2025-12-04T12:08:44.5050242Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5050579Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5051112Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5051605Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5052080Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5052536Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5052975Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5053437Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5053906Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5054367Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5054830Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5055280Z [rank1]:E1204 12:07:00.218000 429615 
site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5055730Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5056193Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5056854Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 1. CUDA driver allocated memory was 2317352960 and is now 3483369472. 2025-12-04T12:08:44.5057446Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5057797Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5058364Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5058851Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5059218Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5059626Z [rank1]:E1204 12:07:00.218000 429615 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:08:44.5059882Z dist init r=1, world=4 2025-12-04T12:08:44.5060279Z [rank0]:[W1204 12:07:00.310027840 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:08:44.5060749Z FAILED [9.8227s] [ 33%] 2025-12-04T12:08:44.5060813Z 2025-12-04T12:08:44.5060875Z =================================== FAILURES =================================== 2025-12-04T12:08:44.5061063Z ______________ TestClipGradNormCUDA.test_low_precision_grads_cuda ______________ 2025-12-04T12:08:44.5061238Z Traceback (most recent call last): 2025-12-04T12:08:44.5061490Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:08:44.5061735Z self._join_processes(fn) 2025-12-04T12:08:44.5061981Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:08:44.5062246Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:08:44.5062514Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:08:44.5062772Z raise RuntimeError(error) 2025-12-04T12:08:44.5062925Z RuntimeError: Process 2 exited with error code 10 and exception: 2025-12-04T12:08:44.5063088Z Traceback (most recent call last): 2025-12-04T12:08:44.5063325Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5063565Z getattr(self, test_name)() 2025-12-04T12:08:44.5063799Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5064030Z fn() 2025-12-04T12:08:44.5064229Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5064460Z method(*args, **kwargs) 2025-12-04T12:08:44.5064678Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5064909Z method(*args, **kwargs) 2025-12-04T12:08:44.5065125Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5065348Z with policy(): 2025-12-04T12:08:44.5065557Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5065787Z raise RuntimeError(msg) 2025-12-04T12:08:44.5066204Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 2. CUDA driver allocated memory was 2300575744 and is now 3466592256. 2025-12-04T12:08:44.5066560Z 2025-12-04T12:08:44.5066637Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5066959Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5067206Z 2025-12-04T12:08:44.5067292Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5067417Z 2025-12-04T12:08:44.5067419Z 2025-12-04T12:08:44.5067496Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:08:44.5067697Z Process 2 terminated with exit code 10, terminating remaining processes. 
2025-12-04T12:08:44.5068084Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-47b9eab10e2da3f4.xml - 2025-12-04T12:08:44.5068437Z =========================== short test summary info ============================ 2025-12-04T12:08:44.5068780Z FAILED [9.8227s] distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_low_precision_grads_cuda - RuntimeError: Process 2 exited with error code 10 and exception: 2025-12-04T12:08:44.5069104Z Traceback (most recent call last): 2025-12-04T12:08:44.5069345Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5069587Z getattr(self, test_name)() 2025-12-04T12:08:44.5069815Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5070045Z fn() 2025-12-04T12:08:44.5070244Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5070474Z method(*args, **kwargs) 2025-12-04T12:08:44.5070721Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5070950Z method(*args, **kwargs) 2025-12-04T12:08:44.5071166Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5071392Z with policy(): 2025-12-04T12:08:44.5071602Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5071830Z raise RuntimeError(msg) 2025-12-04T12:08:44.5072220Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 2. CUDA driver allocated memory was 2300575744 and is now 3466592256. 2025-12-04T12:08:44.5072573Z 2025-12-04T12:08:44.5072648Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5072966Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5073213Z 2025-12-04T12:08:44.5073307Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5073497Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 2025-12-04T12:08:44.5073660Z ======================= 1 failed, 1 deselected in 9.83s ======================== 2025-12-04T12:08:44.5073798Z Got exit code 1 2025-12-04T12:08:44.5073893Z Retrying single test... 
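
[editor's note] Two of the warnings earlier in this run tie together: the barrier() UserWarning suggests passing `device_id` to `init_process_group`, and the ProcessGroupNCCL warning flags that `destroy_process_group()` was never called before program exit. A minimal sketch of the suggested process-group lifecycle, assuming a torchrun-style env:// rendezvous and `rank` as the local rank:

import torch
import torch.distributed as dist

# Assumptions: env:// rendezvous (torchrun-style launch); `rank` is the local rank.
dist.init_process_group("nccl", device_id=torch.device("cuda", rank))
try:
    dist.barrier()  # now unambiguous about which device the barrier uses
    ...             # test / training body goes here
finally:
    dist.destroy_process_group()  # avoids the shutdown warning and frees NCCL resources
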
2025-12-04T12:08:44.5074201Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-63ab7aad9bf62ce8.xml 2025-12-04T12:08:44.5074510Z ============================= test session starts ============================== 2025-12-04T12:08:44.5074716Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:08:44.5074905Z cachedir: .pytest_cache 2025-12-04T12:08:44.5075123Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:08:44.5075358Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:08:44.5075473Z configfile: pytest.ini 2025-12-04T12:08:44.5075694Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:08:44.5075960Z collecting ... collected 4 items / 3 deselected / 1 selected 2025-12-04T12:08:44.5076270Z stepcurrent: skipping 1 already run items. Running only test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_low_precision_grads_cuda 2025-12-04T12:08:44.5076547Z Running 1 items in this shard 2025-12-04T12:08:44.5076619Z 2025-12-04T12:08:44.5076906Z distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_low_precision_grads_cuda I1204 12:07:04.535000 429947 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 430016 2025-12-04T12:08:44.5077400Z I1204 12:07:04.536000 429947 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 430017 2025-12-04T12:08:44.5077752Z I1204 12:07:04.536000 429947 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 430018 2025-12-04T12:08:44.5078093Z I1204 12:07:04.537000 429947 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 430019 2025-12-04T12:08:44.5078779Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5079363Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5079942Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5080518Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5081130Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
2025-12-04T12:08:44.5081711Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5082291Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5082869Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5083290Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning. 2025-12-04T12:08:44.5083658Z return func(*args, **kwargs) 2025-12-04T12:08:44.5084023Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:426: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5084395Z return FSDP(layer, group, **fsdp_kwargs) 2025-12-04T12:08:44.5084759Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:426: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5085129Z return FSDP(layer, group, **fsdp_kwargs) 2025-12-04T12:08:44.5085500Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:426: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5085871Z return FSDP(layer, group, **fsdp_kwargs) 2025-12-04T12:08:44.5086239Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:426: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5086628Z return FSDP(layer, group, **fsdp_kwargs) 2025-12-04T12:08:44.5086984Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:275: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5087350Z fsdp_model = FSDP( 2025-12-04T12:08:44.5087688Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:275: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5088037Z fsdp_model = FSDP( 2025-12-04T12:08:44.5088373Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:275: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5088718Z fsdp_model = FSDP( 2025-12-04T12:08:44.5089046Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:275: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 
2025-12-04T12:08:44.5089390Z fsdp_model = FSDP( 2025-12-04T12:08:44.5089600Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5089943Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5090432Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5090948Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5091431Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5091882Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5092323Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5092822Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5093292Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5093753Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5094214Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5094667Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5095121Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5095586Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5096239Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 1. CUDA driver allocated memory was 2317352960 and is now 3483369472. 
2025-12-04T12:08:44.5096852Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5097205Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5097779Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5098266Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5098632Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5099038Z [rank1]:E1204 12:07:12.554000 430017 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:08:44.5099274Z dist init r=1, world=4 2025-12-04T12:08:44.5099477Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5099816Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5100304Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5100824Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5101306Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5101755Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5102222Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5102686Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5103154Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5103619Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5104084Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5104544Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 
2025-12-04T12:08:44.5104998Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5105482Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5106131Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 2. CUDA driver allocated memory was 2300575744 and is now 3466592256. 2025-12-04T12:08:44.5106726Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5107078Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5107646Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5108140Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5108504Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5108918Z [rank2]:E1204 12:07:12.559000 430018 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:08:44.5109161Z dist init r=2, world=4 2025-12-04T12:08:44.5109367Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5109704Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5110191Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5110707Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5111224Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5111671Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5112109Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5112572Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5113037Z 
[rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5113498Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5113962Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5114426Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5114882Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5115361Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5115997Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 3. CUDA driver allocated memory was 2250244096 and is now 3416260608. 2025-12-04T12:08:44.5116593Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5116942Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5117517Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5118001Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5118368Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5118781Z [rank3]:E1204 12:07:12.614000 430019 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:08:44.5119025Z dist init r=3, world=4 2025-12-04T12:08:44.5119232Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5119573Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5120058Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5120557Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5121065Z [rank0]:E1204 
12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5121514Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5121953Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5122423Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5122897Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5123363Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5123840Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5124312Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5124767Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5125237Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5125871Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 0. CUDA driver allocated memory was 2459959296 and is now 3625975808. 
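Every rank fails the same way: the caching allocator's balance grows from 512 to 92672 bytes over the test, and driver-level allocation grows by roughly 1.1 GiB. A rough sketch of how such before/after deltas can be taken (an illustration of the idea only, not the actual checker in common_utils.py):

    import torch

    def snapshot(device: int):
        # Caching-allocator view: bytes currently held by live tensors.
        alloc = torch.cuda.memory_allocated(device)
        # Driver view: total device memory minus what the driver reports free.
        free, total = torch.cuda.mem_get_info(device)
        return alloc, total - free

    torch.cuda.synchronize(0)
    before_alloc, before_driver = snapshot(0)
    # ... run the test body here ...
    torch.cuda.synchronize(0)
    torch.cuda.empty_cache()
    after_alloc, after_driver = snapshot(0)
    if after_alloc > before_alloc or after_driver > before_driver:
        raise RuntimeError(
            f"possible leak: allocator {before_alloc} -> {after_alloc}, "
            f"driver {before_driver} -> {after_driver}"
        )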
2025-12-04T12:08:44.5126465Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5126818Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5127387Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5127874Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5128241Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5128654Z [rank0]:E1204 12:07:12.631000 430016 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:08:44.5128899Z dist init r=0, world=4 2025-12-04T12:08:44.5129295Z [rank0]:[W1204 12:07:12.768459946 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:08:44.5129703Z FAILED [9.8222s] [100%] 2025-12-04T12:08:44.5129772Z 2025-12-04T12:08:44.5129864Z =================================== FAILURES =================================== 2025-12-04T12:08:44.5130057Z ______________ TestClipGradNormCUDA.test_low_precision_grads_cuda ______________ 2025-12-04T12:08:44.5130231Z Traceback (most recent call last): 2025-12-04T12:08:44.5130481Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:08:44.5130759Z self._join_processes(fn) 2025-12-04T12:08:44.5131007Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:08:44.5131274Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:08:44.5131544Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:08:44.5131809Z raise RuntimeError(error) 2025-12-04T12:08:44.5131971Z RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T12:08:44.5132136Z Traceback (most recent call last): 2025-12-04T12:08:44.5132378Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5132637Z getattr(self, test_name)() 2025-12-04T12:08:44.5132871Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5133107Z fn() 2025-12-04T12:08:44.5133329Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5133564Z method(*args, **kwargs) 2025-12-04T12:08:44.5133788Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5134019Z method(*args, **kwargs) 2025-12-04T12:08:44.5134246Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5134475Z with policy(): 2025-12-04T12:08:44.5134689Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5134925Z raise RuntimeError(msg) 2025-12-04T12:08:44.5135318Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 1. CUDA driver allocated memory was 2317352960 and is now 3483369472. 2025-12-04T12:08:44.5135675Z 2025-12-04T12:08:44.5135755Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5136079Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5136323Z 2025-12-04T12:08:44.5136417Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5136542Z 2025-12-04T12:08:44.5136543Z 2025-12-04T12:08:44.5136625Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:08:44.5136830Z Process 1 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:08:44.5137215Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-63ab7aad9bf62ce8.xml - 2025-12-04T12:08:44.5137573Z =========================== short test summary info ============================ 2025-12-04T12:08:44.5137901Z FAILED [9.8222s] distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_low_precision_grads_cuda - RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T12:08:44.5138213Z Traceback (most recent call last): 2025-12-04T12:08:44.5138491Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5138738Z getattr(self, test_name)() 2025-12-04T12:08:44.5138974Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5139211Z fn() 2025-12-04T12:08:44.5139414Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5139648Z method(*args, **kwargs) 2025-12-04T12:08:44.5139872Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5140104Z method(*args, **kwargs) 2025-12-04T12:08:44.5140326Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5140555Z with policy(): 2025-12-04T12:08:44.5140809Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5141043Z raise RuntimeError(msg) 2025-12-04T12:08:44.5141435Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 1. CUDA driver allocated memory was 2317352960 and is now 3483369472. 
2025-12-04T12:08:44.5141809Z 2025-12-04T12:08:44.5141884Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5142224Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5142226Z 2025-12-04T12:08:44.5142319Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5142384Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 2025-12-04T12:08:44.5142456Z ======================= 1 failed, 3 deselected in 9.84s ======================== 2025-12-04T12:08:44.5142495Z Got exit code 1 2025-12-04T12:08:44.5142541Z Retrying single test... 2025-12-04T12:08:44.5142754Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-9bf1e7b8bfc7eebd.xml 2025-12-04T12:08:44.5146313Z ============================= test session starts ============================== 2025-12-04T12:08:44.5146443Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:08:44.5146495Z cachedir: .pytest_cache 2025-12-04T12:08:44.5146661Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:08:44.5146711Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:08:44.5146751Z configfile: pytest.ini 2025-12-04T12:08:44.5146916Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:08:44.5146988Z collecting ... collected 4 items / 3 deselected / 1 selected 2025-12-04T12:08:44.5147203Z stepcurrent: skipping 1 already run items. Running only test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_low_precision_grads_cuda 2025-12-04T12:08:44.5147250Z Running 1 items in this shard 2025-12-04T12:08:44.5147252Z 2025-12-04T12:08:44.5147543Z distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_low_precision_grads_cuda I1204 12:07:16.966000 430349 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 430418 2025-12-04T12:08:44.5147703Z I1204 12:07:16.967000 430349 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 430419 2025-12-04T12:08:44.5147855Z I1204 12:07:16.967000 430349 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 430420 2025-12-04T12:08:44.5148059Z I1204 12:07:16.968000 430349 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 430421 2025-12-04T12:08:44.5148554Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5148622Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5149105Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. 
FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5149168Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5149650Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5149732Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5150212Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5150268Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5150557Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning. 2025-12-04T12:08:44.5150641Z return func(*args, **kwargs) 2025-12-04T12:08:44.5150930Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:426: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5150982Z return FSDP(layer, group, **fsdp_kwargs) 2025-12-04T12:08:44.5151269Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:426: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5151320Z return FSDP(layer, group, **fsdp_kwargs) 2025-12-04T12:08:44.5151600Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:426: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5151653Z return FSDP(layer, group, **fsdp_kwargs) 2025-12-04T12:08:44.5151936Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py:426: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5151991Z return FSDP(layer, group, **fsdp_kwargs) 2025-12-04T12:08:44.5152261Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:275: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5152301Z fsdp_model = FSDP( 2025-12-04T12:08:44.5152594Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:275: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 
2025-12-04T12:08:44.5152633Z fsdp_model = FSDP( 2025-12-04T12:08:44.5152897Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:275: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5152937Z fsdp_model = FSDP( 2025-12-04T12:08:44.5153204Z /var/lib/jenkins/pytorch/test/distributed/fsdp/test_fsdp_clip_grad_norm.py:275: FutureWarning: The `NO_SHARD` sharding strategy is deprecated. If having issues, please use `DistributedDataParallel` instead. 2025-12-04T12:08:44.5153245Z fsdp_model = FSDP( 2025-12-04T12:08:44.5153391Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5153554Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5153843Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5154011Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5154315Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5154443Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5154723Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5154876Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5155151Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5155306Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5155580Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5155719Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5155995Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5156141Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5156592Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in 
__mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 1. CUDA driver allocated memory was 2317352960 and is now 3483369472. 2025-12-04T12:08:44.5156731Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5156928Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5157265Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5157381Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5157595Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5157760Z [rank1]:E1204 12:07:24.924000 430419 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:08:44.5157801Z dist init r=1, world=4 2025-12-04T12:08:44.5157938Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5158107Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5158391Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5158560Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5158846Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5158972Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5159248Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5159395Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5159668Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5159815Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5160091Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5160225Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5160632Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5160782Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5161260Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 0. CUDA driver allocated memory was 2459959296 and is now 3625975808. 2025-12-04T12:08:44.5161378Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5161572Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5161910Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5162024Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5162237Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5162404Z [rank0]:E1204 12:07:24.937000 430418 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:08:44.5162460Z dist init r=0, world=4 2025-12-04T12:08:44.5162600Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5162779Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5163067Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5163220Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5163505Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5163628Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5163905Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5164057Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5164331Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5164481Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5164754Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5164898Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5165175Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5165353Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5165796Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 3. CUDA driver allocated memory was 2250244096 and is now 3416260608. 
2025-12-04T12:08:44.5165911Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5166103Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5166435Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5166548Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5166768Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5166931Z [rank3]:E1204 12:07:24.944000 430421 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:08:44.5166983Z dist init r=3, world=4 2025-12-04T12:08:44.5167121Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5167281Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5167570Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5167728Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5168011Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5168139Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5168416Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5168565Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5168840Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5168984Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5169258Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5169393Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 
2025-12-04T12:08:44.5169695Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5169842Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5170286Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 2. CUDA driver allocated memory was 2300575744 and is now 3466592256. 2025-12-04T12:08:44.5170402Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5170641Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5170978Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5171104Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5171328Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5171490Z [rank2]:E1204 12:07:24.985000 430420 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:08:44.5171533Z dist init r=2, world=4 2025-12-04T12:08:44.5171869Z [rank0]:[W1204 12:07:25.995582833 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:08:44.5171910Z FAILED [9.7216s] [100%] 2025-12-04T12:08:44.5171912Z 2025-12-04T12:08:44.5171972Z =================================== FAILURES =================================== 2025-12-04T12:08:44.5172068Z ______________ TestClipGradNormCUDA.test_low_precision_grads_cuda ______________ 2025-12-04T12:08:44.5172122Z Traceback (most recent call last): 2025-12-04T12:08:44.5172285Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:08:44.5172333Z self._join_processes(fn) 2025-12-04T12:08:44.5172505Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:08:44.5172564Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:08:44.5172742Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:08:44.5172789Z raise RuntimeError(error) 2025-12-04T12:08:44.5172870Z RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T12:08:44.5172918Z Traceback (most recent call last): 2025-12-04T12:08:44.5173078Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5173125Z getattr(self, test_name)() 2025-12-04T12:08:44.5173283Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5173318Z fn() 2025-12-04T12:08:44.5173467Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5173571Z method(*args, **kwargs) 2025-12-04T12:08:44.5173722Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5173767Z method(*args, **kwargs) 2025-12-04T12:08:44.5173920Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5173960Z with policy(): 2025-12-04T12:08:44.5174113Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5174156Z raise RuntimeError(msg) 2025-12-04T12:08:44.5174481Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 1. CUDA driver allocated memory was 2317352960 and is now 3483369472. 2025-12-04T12:08:44.5174484Z 2025-12-04T12:08:44.5174559Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5174771Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5174785Z 2025-12-04T12:08:44.5174873Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5174875Z 2025-12-04T12:08:44.5174877Z 2025-12-04T12:08:44.5174956Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:08:44.5175057Z Process 1 terminated with exit code 10, terminating remaining processes. 
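Independent of the leak, every run exits with the ProcessGroupNCCL warning that `destroy_process_group()` was never called. A minimal guarded-teardown sketch, assuming a process group created as in the earlier sketches:

    import torch.distributed as dist

    # Tear the process group down explicitly before interpreter exit, as
    # https://pytorch.org/docs/stable/distributed.html#shutdown advises;
    # the guard makes the teardown safe even if init never happened.
    if dist.is_initialized():
        dist.destroy_process_group()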
2025-12-04T12:08:44.5175315Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-9bf1e7b8bfc7eebd.xml - 2025-12-04T12:08:44.5175378Z =========================== short test summary info ============================ 2025-12-04T12:08:44.5175607Z FAILED [9.7216s] distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_low_precision_grads_cuda - RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T12:08:44.5175655Z Traceback (most recent call last): 2025-12-04T12:08:44.5175819Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5175865Z getattr(self, test_name)() 2025-12-04T12:08:44.5176022Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5176060Z fn() 2025-12-04T12:08:44.5176208Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5176253Z method(*args, **kwargs) 2025-12-04T12:08:44.5176402Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5176444Z method(*args, **kwargs) 2025-12-04T12:08:44.5176596Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5176638Z with policy(): 2025-12-04T12:08:44.5176790Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5176833Z raise RuntimeError(msg) 2025-12-04T12:08:44.5177150Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_low_precision_grads_cuda! Caching allocator allocated memory was 512 and is now reported as 92672 on device 1. CUDA driver allocated memory was 2317352960 and is now 3483369472. 2025-12-04T12:08:44.5177159Z 2025-12-04T12:08:44.5177233Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5177465Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_low_precision_grads_cuda 2025-12-04T12:08:44.5177467Z 2025-12-04T12:08:44.5177556Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5177624Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
2025-12-04T12:08:44.5177687Z ======================= 1 failed, 3 deselected in 9.73s ========================
2025-12-04T12:08:44.5177727Z Got exit code 1
2025-12-04T12:08:44.5177886Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_low_precision_grads_cuda
2025-12-04T12:08:44.5178014Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set
2025-12-04T12:08:44.5178229Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-b530aeb44bfe7412.xml
2025-12-04T12:08:44.5178292Z ============================= test session starts ==============================
2025-12-04T12:08:44.5178408Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:08:44.5178450Z cachedir: .pytest_cache
2025-12-04T12:08:44.5178608Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:08:44.5178673Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:08:44.5178712Z configfile: pytest.ini
2025-12-04T12:08:44.5178880Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:08:44.5178971Z collecting ... collected 4 items / 2 deselected / 2 selected
2025-12-04T12:08:44.5179032Z stepcurrent: skipping 2 already run items.
2025-12-04T12:08:44.5179075Z Running 2 items in this shard
2025-12-04T12:08:44.5179080Z 
2025-12-04T12:08:44.5179362Z distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_no_gradients_cuda I1204 12:07:29.317000 430751 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 430820
2025-12-04T12:08:44.5179519Z I1204 12:07:29.317000 430751 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 430821
2025-12-04T12:08:44.5179671Z I1204 12:07:29.318000 430751 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 430822
2025-12-04T12:08:44.5179822Z I1204 12:07:29.318000 430751 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 430823
2025-12-04T12:08:44.5180317Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5180383Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5180906Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5180967Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5181457Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5181544Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5182025Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5182087Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5182378Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning.
2025-12-04T12:08:44.5182426Z return func(*args, **kwargs)
2025-12-04T12:08:44.5182911Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5182983Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5183461Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5183535Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5184015Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5184072Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5184551Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5184608Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5184894Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning.
2025-12-04T12:08:44.5184937Z return func(*args, **kwargs)
2025-12-04T12:08:44.5185084Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.5185247Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.5185536Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.5185692Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.5186003Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.5186129Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.5186403Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5186554Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5186826Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5186976Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5187250Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.5187397Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.5187690Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.5187835Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.5188275Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 3. CUDA driver allocated memory was 2250244096 and is now 2904555520.
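The `device_id` and `barrier()` UserWarnings above spell out their own remedy: bind each rank to an explicitly indexed device before the process group and the FSDP wrapper are created. Below is a minimal sketch of that setup, not the test's actual code; it assumes one GPU per rank and a `torchrun`-style `LOCAL_RANK` variable, and the `nn.Linear` is a placeholder for the module under test. The per-rank traceback continues after the sketch.

```python
import os

import torch
import torch.distributed as dist
from torch import nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def setup_fsdp_with_explicit_device() -> FSDP:
    # torchrun exports LOCAL_RANK; one GPU per rank is assumed here.
    local_rank = int(os.environ["LOCAL_RANK"])
    device = torch.device("cuda", local_rank)

    # An indexed device avoids "FSDP got the argument `device_id` cuda ...
    # which does not have an explicit index".
    torch.cuda.set_device(device)

    # Passing device_id here silences "barrier(): using the device under
    # current context".
    dist.init_process_group(backend="nccl", device_id=device)

    model = nn.Linear(8, 8)  # placeholder module, not the test's model
    return FSDP(model, device_id=device)
```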
2025-12-04T12:08:44.5188390Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5188585Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.5188914Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda
2025-12-04T12:08:44.5189028Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5189240Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.5189404Z [rank3]:E1204 12:07:36.036000 430823 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10
2025-12-04T12:08:44.5189444Z dist init r=3, world=4
2025-12-04T12:08:44.5189581Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.5189744Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.5190053Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.5190209Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.5190494Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.5190652Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.5190929Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5191075Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5191352Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5191500Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5191789Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.5191946Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.5192222Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.5192373Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.5192808Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 0. CUDA driver allocated memory was 2459959296 and is now 3114270720.
2025-12-04T12:08:44.5192930Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5193125Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.5193450Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda
2025-12-04T12:08:44.5193566Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5193777Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.5193945Z [rank0]:E1204 12:07:36.039000 430820 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:08:44.5193984Z dist init r=0, world=4
2025-12-04T12:08:44.5194125Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.5194315Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.5194602Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.5194755Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.5195040Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.5195164Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.5195441Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5195588Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5195882Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5196040Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5196313Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.5196449Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.5196726Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.5196873Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.5197305Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 2. CUDA driver allocated memory was 2300575744 and is now 2954887168.
2025-12-04T12:08:44.5197421Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5197616Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.5197938Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda
2025-12-04T12:08:44.5198051Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5198262Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.5198422Z [rank2]:E1204 12:07:36.090000 430822 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:08:44.5198463Z dist init r=2, world=4
2025-12-04T12:08:44.5198625Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.5198782Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.5199070Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.5199223Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.5199510Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.5199633Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.5199910Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5200067Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5200354Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5200506Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5200823Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.5200960Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.5201235Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.5201385Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.5201819Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 1. CUDA driver allocated memory was 2317352960 and is now 2971664384.
2025-12-04T12:08:44.5201934Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5202130Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.5202454Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda
2025-12-04T12:08:44.5202572Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5202814Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.5202978Z [rank1]:E1204 12:07:36.093000 430821 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:08:44.5203017Z dist init r=1, world=4
2025-12-04T12:08:44.5203059Z FAILED [7.7185s] [ 50%]
2025-12-04T12:08:44.5203061Z 
2025-12-04T12:08:44.5203119Z =================================== FAILURES ===================================
2025-12-04T12:08:44.5203207Z _________________ TestClipGradNormCUDA.test_no_gradients_cuda __________________
2025-12-04T12:08:44.5203255Z Traceback (most recent call last):
2025-12-04T12:08:44.5203418Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper
2025-12-04T12:08:44.5203466Z self._join_processes(fn)
2025-12-04T12:08:44.5203638Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes
2025-12-04T12:08:44.5203698Z self._check_return_codes(fn, elapsed_time)
2025-12-04T12:08:44.5203876Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes
2025-12-04T12:08:44.5203920Z raise RuntimeError(error)
2025-12-04T12:08:44.5204016Z RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:08:44.5204065Z Traceback (most recent call last):
2025-12-04T12:08:44.5204225Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.5204284Z getattr(self, test_name)()
2025-12-04T12:08:44.5204439Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.5204476Z fn()
2025-12-04T12:08:44.5204626Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5204670Z method(*args, **kwargs)
2025-12-04T12:08:44.5204821Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5204862Z method(*args, **kwargs)
2025-12-04T12:08:44.5205012Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.5205052Z with policy():
2025-12-04T12:08:44.5205202Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.5205245Z raise RuntimeError(msg)
2025-12-04T12:08:44.5205552Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 0. CUDA driver allocated memory was 2459959296 and is now 3114270720.
2025-12-04T12:08:44.5205554Z 
2025-12-04T12:08:44.5205631Z To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.5205828Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda
2025-12-04T12:08:44.5205831Z 
2025-12-04T12:08:44.5205920Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.5205922Z 
2025-12-04T12:08:44.5205924Z 
2025-12-04T12:08:44.5206000Z ----------------------------- Captured stdout call -----------------------------
2025-12-04T12:08:44.5206089Z Process 0 terminated with exit code 10, terminating remaining processes.
2025-12-04T12:08:44.5206350Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-b530aeb44bfe7412.xml -
2025-12-04T12:08:44.5206409Z =========================== short test summary info ============================
2025-12-04T12:08:44.5206651Z FAILED [7.7185s] distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_no_gradients_cuda - RuntimeError: Process 0 exited with error code 10 and exception:
2025-12-04T12:08:44.5206696Z Traceback (most recent call last):
2025-12-04T12:08:44.5206859Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.5206901Z getattr(self, test_name)()
2025-12-04T12:08:44.5207058Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.5207093Z fn()
2025-12-04T12:08:44.5207245Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5207285Z method(*args, **kwargs)
2025-12-04T12:08:44.5207434Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5207474Z method(*args, **kwargs)
2025-12-04T12:08:44.5207622Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.5207658Z with policy():
2025-12-04T12:08:44.5207808Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.5207859Z raise RuntimeError(msg)
2025-12-04T12:08:44.5208167Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 0. CUDA driver allocated memory was 2459959296 and is now 3114270720.
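The leak checker behind this RuntimeError snapshots both the caching allocator and the driver-reported memory before the test body and compares them afterwards; the "was 512 and is now reported as 6656" figures are exactly that delta. A rough sketch of the before/after accounting, under the assumption that a leak is only flagged when both counters grow; `run_test_body` is a stand-in, and this is not the harness's actual implementation:

```python
import torch


def check_for_leak(run_test_body, device: int = 0) -> None:
    """Sketch of a before/after CUDA memory comparison, in the spirit of
    PYTORCH_TEST_CUDA_MEM_LEAK_CHECK (not the real harness code)."""
    torch.cuda.synchronize(device)
    caching_before = torch.cuda.memory_allocated(device)
    free_before, total = torch.cuda.mem_get_info(device)
    driver_before = total - free_before  # memory the driver has handed out

    run_test_body()

    torch.cuda.synchronize(device)
    torch.cuda.empty_cache()  # return cached blocks so driver numbers settle
    caching_after = torch.cuda.memory_allocated(device)
    free_after, _ = torch.cuda.mem_get_info(device)
    driver_after = total - free_after

    # "CUDA driver API confirmed a leak": only fail when the driver agrees
    # with the caching allocator that memory is still outstanding.
    if caching_after > caching_before and driver_after > driver_before:
        raise RuntimeError(
            f"possible leak: caching allocator {caching_before} -> "
            f"{caching_after}, driver {driver_before} -> {driver_after}"
        )
```

The log's repro lines then continue below.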
2025-12-04T12:08:44.5208182Z 
2025-12-04T12:08:44.5208259Z To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.5208459Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda
2025-12-04T12:08:44.5208462Z 
2025-12-04T12:08:44.5208550Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.5208612Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
2025-12-04T12:08:44.5208678Z ======================= 1 failed, 2 deselected in 7.73s ========================
2025-12-04T12:08:44.5208715Z Got exit code 1
2025-12-04T12:08:44.5208757Z Retrying single test...
2025-12-04T12:08:44.5208969Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-08b4d7758bae657f.xml
2025-12-04T12:08:44.5209029Z ============================= test session starts ==============================
2025-12-04T12:08:44.5209139Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python
2025-12-04T12:08:44.5209181Z cachedir: .pytest_cache
2025-12-04T12:08:44.5209338Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
2025-12-04T12:08:44.5209384Z rootdir: /var/lib/jenkins/pytorch
2025-12-04T12:08:44.5209425Z configfile: pytest.ini
2025-12-04T12:08:44.5209587Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0
2025-12-04T12:08:44.5209658Z collecting ... collected 4 items / 3 deselected / 1 selected
2025-12-04T12:08:44.5209855Z stepcurrent: skipping 2 already run items. Running only test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_no_gradients_cuda
2025-12-04T12:08:44.5209899Z Running 1 items in this shard
2025-12-04T12:08:44.5209904Z 
2025-12-04T12:08:44.5210180Z distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_no_gradients_cuda I1204 12:07:39.617000 431129 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 431198
2025-12-04T12:08:44.5210358Z I1204 12:07:39.618000 431129 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 431199
2025-12-04T12:08:44.5210510Z I1204 12:07:39.618000 431129 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 431200
2025-12-04T12:08:44.5210707Z I1204 12:07:39.619000 431129 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 431201
2025-12-04T12:08:44.5211199Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5211263Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5211747Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5211824Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5212300Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5212372Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5212851Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5212908Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5213195Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning.
2025-12-04T12:08:44.5213240Z return func(*args, **kwargs)
2025-12-04T12:08:44.5213718Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5213776Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5214251Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5214312Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5214817Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5214874Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5215352Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument.
2025-12-04T12:08:44.5215409Z device_from_device_id = _get_device_from_device_id(
2025-12-04T12:08:44.5215693Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning.
2025-12-04T12:08:44.5215736Z return func(*args, **kwargs)
2025-12-04T12:08:44.5215879Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.5216039Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.5216339Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.5216508Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.5216791Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.5216916Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.5217189Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5217338Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5217615Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5217762Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5218036Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.5218170Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.5218446Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.5218592Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.5219045Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 1. CUDA driver allocated memory was 2317352960 and is now 2971664384.
2025-12-04T12:08:44.5219159Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5219352Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.5219675Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda
2025-12-04T12:08:44.5219788Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5219997Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.5220159Z [rank1]:E1204 12:07:46.403000 431199 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10
2025-12-04T12:08:44.5220209Z dist init r=1, world=4
2025-12-04T12:08:44.5220345Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.5220502Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.5220833Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.5220987Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.5221270Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.5221393Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.5221668Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5221816Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5222093Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5222239Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5222511Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.5222649Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.5222923Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.5223107Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.5223537Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 0. CUDA driver allocated memory was 2459959296 and is now 3114270720.
2025-12-04T12:08:44.5223651Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5223843Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.5224165Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda
2025-12-04T12:08:44.5224277Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5224485Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.5224663Z [rank0]:E1204 12:07:46.411000 431198 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10
2025-12-04T12:08:44.5224729Z dist init r=0, world=4
2025-12-04T12:08:44.5224866Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.5225023Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.5225311Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.5225462Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.5225745Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.5225870Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.5226143Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5226290Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5226565Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5226711Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5226984Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.5227119Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.5227415Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.5227560Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.5227992Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 2. CUDA driver allocated memory was 2300575744 and is now 2954887168.
2025-12-04T12:08:44.5228106Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5228298Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir:
2025-12-04T12:08:44.5228618Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda
2025-12-04T12:08:44.5228742Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] 
2025-12-04T12:08:44.5228950Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-12-04T12:08:44.5229122Z [rank2]:E1204 12:07:46.462000 431200 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10
2025-12-04T12:08:44.5229162Z dist init r=2, world=4
2025-12-04T12:08:44.5229298Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception:
2025-12-04T12:08:44.5229458Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last):
2025-12-04T12:08:44.5229741Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test
2025-12-04T12:08:44.5229895Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)()
2025-12-04T12:08:44.5230178Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper
2025-12-04T12:08:44.5230300Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] fn()
2025-12-04T12:08:44.5230577Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5230763Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5231041Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper
2025-12-04T12:08:44.5231186Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs)
2025-12-04T12:08:44.5231459Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper
2025-12-04T12:08:44.5231621Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] with policy():
2025-12-04T12:08:44.5231896Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__
2025-12-04T12:08:44.5232045Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg)
2025-12-04T12:08:44.5232473Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 3. CUDA driver allocated memory was 2250244096 and is now 2904555520.
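The per-rank "exiting process N with exit code: 10" lines and the parent frames `_join_processes`/`_check_return_codes` in the tracebacks reflect one pattern: the parent spawns a process per rank, joins them, and turns any nonzero exit code into the test failure reported below. A simplified sketch of that orchestration, with illustrative names that are not the harness's own:

```python
import multiprocessing as mp

TEST_FAILURE_EXIT_CODE = 10  # mirrors the "exit code: 10" seen above


def rank_entry(rank: int, world_size: int) -> None:
    # Stand-in for the per-rank test body; on a failed check the real
    # child exits with a failure code instead of raising across processes.
    pass


def run_in_processes(world_size: int = 4) -> None:
    ctx = mp.get_context("spawn")
    procs = [
        ctx.Process(target=rank_entry, args=(rank, world_size))
        for rank in range(world_size)
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Analogue of _check_return_codes: any nonzero exitcode fails the test.
    for rank, p in enumerate(procs):
        if p.exitcode != 0:
            raise RuntimeError(
                f"Process {rank} exited with error code {p.exitcode}"
            )
```

The rank 3 error block then finishes below.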
2025-12-04T12:08:44.5232589Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5232780Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5233115Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda 2025-12-04T12:08:44.5233240Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5233448Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5233613Z [rank3]:E1204 12:07:46.469000 431201 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:08:44.5233651Z dist init r=3, world=4 2025-12-04T12:08:44.5233690Z FAILED [7.9194s] [100%] 2025-12-04T12:08:44.5233693Z 2025-12-04T12:08:44.5233747Z =================================== FAILURES =================================== 2025-12-04T12:08:44.5233837Z _________________ TestClipGradNormCUDA.test_no_gradients_cuda __________________ 2025-12-04T12:08:44.5233881Z Traceback (most recent call last): 2025-12-04T12:08:44.5234043Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:08:44.5234088Z self._join_processes(fn) 2025-12-04T12:08:44.5234260Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:08:44.5234313Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:08:44.5234491Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:08:44.5234535Z raise RuntimeError(error) 2025-12-04T12:08:44.5234616Z RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T12:08:44.5234660Z Traceback (most recent call last): 2025-12-04T12:08:44.5234821Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5234862Z getattr(self, test_name)() 2025-12-04T12:08:44.5235021Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5235057Z fn() 2025-12-04T12:08:44.5235208Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5235249Z method(*args, **kwargs) 2025-12-04T12:08:44.5235423Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5235464Z method(*args, **kwargs) 2025-12-04T12:08:44.5235614Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5235652Z with policy(): 2025-12-04T12:08:44.5235802Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 
2025-12-04T12:08:44.5235843Z raise RuntimeError(msg) 2025-12-04T12:08:44.5236149Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 1. CUDA driver allocated memory was 2317352960 and is now 2971664384. 2025-12-04T12:08:44.5236151Z 2025-12-04T12:08:44.5236227Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5236424Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda 2025-12-04T12:08:44.5236426Z 2025-12-04T12:08:44.5236515Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5236517Z 2025-12-04T12:08:44.5236530Z 2025-12-04T12:08:44.5236604Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:08:44.5236693Z Process 1 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:08:44.5236963Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-08b4d7758bae657f.xml - 2025-12-04T12:08:44.5237024Z =========================== short test summary info ============================ 2025-12-04T12:08:44.5237244Z FAILED [7.9194s] distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_no_gradients_cuda - RuntimeError: Process 1 exited with error code 10 and exception: 2025-12-04T12:08:44.5237290Z Traceback (most recent call last): 2025-12-04T12:08:44.5237455Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5237498Z getattr(self, test_name)() 2025-12-04T12:08:44.5237658Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5237691Z fn() 2025-12-04T12:08:44.5237842Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5237884Z method(*args, **kwargs) 2025-12-04T12:08:44.5238036Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5238077Z method(*args, **kwargs) 2025-12-04T12:08:44.5238230Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5238266Z with policy(): 2025-12-04T12:08:44.5238419Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5238459Z raise RuntimeError(msg) 2025-12-04T12:08:44.5238768Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 1. CUDA driver allocated memory was 2317352960 and is now 2971664384. 
2025-12-04T12:08:44.5238772Z 2025-12-04T12:08:44.5238846Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5239044Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda 2025-12-04T12:08:44.5239046Z 2025-12-04T12:08:44.5239134Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5239219Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 2025-12-04T12:08:44.5239281Z ======================= 1 failed, 3 deselected in 7.93s ======================== 2025-12-04T12:08:44.5239316Z Got exit code 1 2025-12-04T12:08:44.5239357Z Retrying single test... 2025-12-04T12:08:44.5239566Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-edcbc612a4e53459.xml 2025-12-04T12:08:44.5239625Z ============================= test session starts ============================== 2025-12-04T12:08:44.5239735Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:08:44.5239777Z cachedir: .pytest_cache 2025-12-04T12:08:44.5239931Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:08:44.5239979Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:08:44.5240019Z configfile: pytest.ini 2025-12-04T12:08:44.5240181Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:08:44.5240250Z collecting ... collected 4 items / 3 deselected / 1 selected 2025-12-04T12:08:44.5240463Z stepcurrent: skipping 2 already run items. Running only test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_no_gradients_cuda 2025-12-04T12:08:44.5240518Z Running 1 items in this shard 2025-12-04T12:08:44.5240520Z 2025-12-04T12:08:44.5240832Z distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_no_gradients_cuda I1204 12:07:50.140000 431507 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 431576 2025-12-04T12:08:44.5240984Z I1204 12:07:50.141000 431507 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 431577 2025-12-04T12:08:44.5241134Z I1204 12:07:50.141000 431507 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 431578 2025-12-04T12:08:44.5241287Z I1204 12:07:50.142000 431507 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 431579 2025-12-04T12:08:44.5241774Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5241838Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5242318Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. 
If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5242377Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5242856Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5242913Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5243418Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5243475Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5243766Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning. 2025-12-04T12:08:44.5243811Z return func(*args, **kwargs) 2025-12-04T12:08:44.5244290Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 0, which does not have an explicit index. FSDP will use the current device 0. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5244348Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5244823Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 3, which does not have an explicit index. FSDP will use the current device 3. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5244908Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5245383Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 2, which does not have an explicit index. FSDP will use the current device 2. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 2025-12-04T12:08:44.5245443Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5245919Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:571: UserWarning: FSDP got the argument `device_id` cuda on rank 1, which does not have an explicit index. FSDP will use the current device 1. If this is incorrect, please explicitly call `torch.cuda.set_device()` before FSDP initialization or pass in the explicit device index as the `device_id` argument. 
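[editor's note] Each rank repeats the same UserWarning because the test hands FSDP the bare device string "cuda" as device_id, with no index. The warning itself names the remedy; the sketch below applies it in a hypothetical per-rank setup helper (the function name and the NCCL backend choice are illustrative, not the test's actual code).

    import torch
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    def setup_fsdp(rank: int, world_size: int, model: torch.nn.Module) -> FSDP:
        # Pin this process to its GPU before any FSDP or collective work.
        torch.cuda.set_device(rank)
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        # An explicit index (an int, or torch.device("cuda", rank)) instead of
        # the ambiguous "cuda" keeps FSDP from guessing the current device.
        return FSDP(model, device_id=rank)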
2025-12-04T12:08:44.5245976Z device_from_device_id = _get_device_from_device_id( 2025-12-04T12:08:44.5246261Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning. 2025-12-04T12:08:44.5246302Z return func(*args, **kwargs) 2025-12-04T12:08:44.5246446Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5246605Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5246892Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5247045Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5247326Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5247450Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5247742Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5247891Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5248164Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5248314Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5248593Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5248728Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5249001Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5249158Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5249600Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 0. CUDA driver allocated memory was 2459959296 and is now 3114270720. 
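[editor's note] The c10d_logger barrier() warning in the same block has an analogous fix: bind the default process group to a concrete device when it is created, as the warning suggests. A short sketch of that variant (assumes a NCCL/RCCL backend; recent torch.distributed.init_process_group releases accept a device_id keyword):

    import torch
    import torch.distributed as dist

    def init_bound(rank: int, world_size: int) -> torch.device:
        device = torch.device("cuda", rank)
        # Binding the group to a device lets collectives such as barrier()
        # resolve the GPU without relying on the current-device context.
        dist.init_process_group(
            "nccl", rank=rank, world_size=world_size, device_id=device
        )
        return device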
2025-12-04T12:08:44.5249715Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5249908Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5250231Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda 2025-12-04T12:08:44.5250343Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5250554Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5250850Z [rank0]:E1204 12:07:56.770000 431576 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:08:44.5250891Z dist init r=0, world=4 2025-12-04T12:08:44.5251026Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5251187Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5251470Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5251625Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5251937Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5252058Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5252332Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5252477Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5252749Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5252895Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5253170Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5253319Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5253592Z 
[rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5253754Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5254184Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 3. CUDA driver allocated memory was 2250244096 and is now 2904555520. 2025-12-04T12:08:44.5254300Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5254491Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5254821Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda 2025-12-04T12:08:44.5254933Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5255142Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5255305Z [rank3]:E1204 12:07:56.773000 431579 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:08:44.5255442Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5255600Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5255883Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5256063Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5256345Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5256468Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5256741Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5256887Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5257160Z [rank2]:E1204 12:07:56.773000 431578 
site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5257304Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5257592Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5257737Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5258012Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5258160Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5258588Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 2. CUDA driver allocated memory was 2300575744 and is now 2954887168. 2025-12-04T12:08:44.5258702Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5258895Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5259219Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda 2025-12-04T12:08:44.5259330Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5259537Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5259701Z [rank2]:E1204 12:07:56.773000 431578 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:08:44.5259740Z dist init r=3, world=4 2025-12-04T12:08:44.5259781Z dist init r=2, world=4 2025-12-04T12:08:44.5259918Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5260076Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5260381Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5260533Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5260847Z [rank1]:E1204 
12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5260971Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5261249Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5261395Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5261670Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5261829Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5262119Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5262253Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5262531Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5262679Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5263108Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 1. CUDA driver allocated memory was 2317352960 and is now 2971664384. 
2025-12-04T12:08:44.5263226Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5263420Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5263744Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda 2025-12-04T12:08:44.5263857Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5264069Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5264236Z [rank1]:E1204 12:07:56.822000 431577 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:08:44.5264276Z dist init r=1, world=4 2025-12-04T12:08:44.5264316Z FAILED [7.6183s] [100%] 2025-12-04T12:08:44.5264318Z 2025-12-04T12:08:44.5264400Z =================================== FAILURES =================================== 2025-12-04T12:08:44.5264492Z _________________ TestClipGradNormCUDA.test_no_gradients_cuda __________________ 2025-12-04T12:08:44.5264539Z Traceback (most recent call last): 2025-12-04T12:08:44.5264704Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:08:44.5264748Z self._join_processes(fn) 2025-12-04T12:08:44.5264923Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:08:44.5264978Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:08:44.5265157Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:08:44.5265201Z raise RuntimeError(error) 2025-12-04T12:08:44.5265283Z RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:08:44.5265328Z Traceback (most recent call last): 2025-12-04T12:08:44.5265489Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5265543Z getattr(self, test_name)() 2025-12-04T12:08:44.5265702Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5265752Z fn() 2025-12-04T12:08:44.5265905Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5265947Z method(*args, **kwargs) 2025-12-04T12:08:44.5266099Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5266138Z method(*args, **kwargs) 2025-12-04T12:08:44.5266290Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5266328Z with policy(): 2025-12-04T12:08:44.5266478Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 
2025-12-04T12:08:44.5266522Z raise RuntimeError(msg) 2025-12-04T12:08:44.5266828Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 0. CUDA driver allocated memory was 2459959296 and is now 3114270720. 2025-12-04T12:08:44.5266831Z 2025-12-04T12:08:44.5266907Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5267103Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda 2025-12-04T12:08:44.5267105Z 2025-12-04T12:08:44.5267194Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5267196Z 2025-12-04T12:08:44.5267198Z 2025-12-04T12:08:44.5267271Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:08:44.5267361Z Process 0 terminated with exit code 10, terminating remaining processes. 2025-12-04T12:08:44.5267617Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-edcbc612a4e53459.xml - 2025-12-04T12:08:44.5267681Z =========================== short test summary info ============================ 2025-12-04T12:08:44.5267899Z FAILED [7.6183s] distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_no_gradients_cuda - RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:08:44.5267945Z Traceback (most recent call last): 2025-12-04T12:08:44.5268131Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5268176Z getattr(self, test_name)() 2025-12-04T12:08:44.5268339Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5268375Z fn() 2025-12-04T12:08:44.5268531Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5268571Z method(*args, **kwargs) 2025-12-04T12:08:44.5268724Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5268765Z method(*args, **kwargs) 2025-12-04T12:08:44.5268914Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5268950Z with policy(): 2025-12-04T12:08:44.5269103Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5269143Z raise RuntimeError(msg) 2025-12-04T12:08:44.5269451Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_no_gradients_cuda! Caching allocator allocated memory was 512 and is now reported as 6656 on device 0. CUDA driver allocated memory was 2459959296 and is now 3114270720. 
2025-12-04T12:08:44.5269463Z 2025-12-04T12:08:44.5269539Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5269750Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_no_gradients_cuda 2025-12-04T12:08:44.5269752Z 2025-12-04T12:08:44.5269839Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5269902Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 2025-12-04T12:08:44.5269968Z ======================= 1 failed, 3 deselected in 7.64s ======================== 2025-12-04T12:08:44.5270005Z Got exit code 1 2025-12-04T12:08:44.5270157Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_no_gradients_cuda 2025-12-04T12:08:44.5270284Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set 2025-12-04T12:08:44.5270498Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-b8d1dd4a4d36a04a.xml 2025-12-04T12:08:44.5270558Z ============================= test session starts ============================== 2025-12-04T12:08:44.5270704Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:08:44.5270746Z cachedir: .pytest_cache 2025-12-04T12:08:44.5270905Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:08:44.5270951Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:08:44.5270992Z configfile: pytest.ini 2025-12-04T12:08:44.5271150Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:08:44.5271225Z collecting ... collected 4 items / 3 deselected / 1 selected 2025-12-04T12:08:44.5271278Z stepcurrent: skipping 3 already run items. 
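[editor's note] The FAILED CONSISTENTLY line above shows the shard's retry policy as it plays out in this log: a failing test is rerun in isolation, only a second failure marks it consistent, and with continue-through-error set the shard proceeds to the next test instead of aborting. A rough sketch of that control flow, inferred from the log output alone (run_one, the flag name, and the return convention are assumed for illustration, not the harness's actual identifiers):

    def run_with_retry(run_one, test_id: str, continue_through_error: bool) -> bool:
        # run_one is assumed to return the pytest exit code for a single test.
        if run_one(test_id) == 0:
            return True
        print("Retrying single test...")
        if run_one(test_id) == 0:
            return True  # flaky: passed on the isolated rerun
        print(f"FAILED CONSISTENTLY: {test_id}")
        if not continue_through_error:
            raise SystemExit(1)
        return False  # recorded as failed; the shard keeps going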
2025-12-04T12:08:44.5271324Z Running 1 items in this shard 2025-12-04T12:08:44.5271327Z 2025-12-04T12:08:44.5271602Z distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_non_root_cuda I1204 12:08:00.238000 431885 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 431954 2025-12-04T12:08:44.5271755Z I1204 12:08:00.239000 431885 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 431955 2025-12-04T12:08:44.5271937Z I1204 12:08:00.239000 431885 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 431956 2025-12-04T12:08:44.5272085Z I1204 12:08:00.240000 431885 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 431957 2025-12-04T12:08:44.5272226Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5272387Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5272675Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5272828Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5273118Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5273243Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5273531Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5273695Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5273969Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5274115Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5274388Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5274526Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5274803Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5274948Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] raise 
RuntimeError(msg) 2025-12-04T12:08:44.5275378Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 2. CUDA driver allocated memory was 2300575744 and is now 3258974208. 2025-12-04T12:08:44.5275493Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5275690Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5276007Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5276142Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5276352Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5276514Z [rank2]:E1204 12:08:10.170000 431956 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:08:44.5276556Z dist init r=2, world=4 2025-12-04T12:08:44.5276692Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5276853Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5277138Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5277291Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5277575Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5277710Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5277996Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5278142Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5278417Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5278563Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5278838Z 
[rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5278975Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5279255Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5279403Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5279828Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 0. CUDA driver allocated memory was 2459959296 and is now 3418357760. 2025-12-04T12:08:44.5279948Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5280142Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5280484Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5280630Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5280844Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5281010Z [rank0]:E1204 12:08:10.190000 431954 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:08:44.5281048Z dist init r=0, world=4 2025-12-04T12:08:44.5281187Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5281348Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5281631Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5281797Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5282084Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5282224Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5282503Z [rank1]:E1204 12:08:10.270000 431955 
site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5282650Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5282923Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5283073Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5283346Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5283484Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5283757Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5283905Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5284333Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 1. CUDA driver allocated memory was 2317352960 and is now 3275751424. 
2025-12-04T12:08:44.5284447Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5284667Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5284985Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5285099Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5285309Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5285474Z [rank1]:E1204 12:08:10.270000 431955 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:08:44.5285511Z dist init r=1, world=4 2025-12-04T12:08:44.5285650Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5285810Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5286104Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5286277Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5286558Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5286683Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5286958Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5287109Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5287385Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5287530Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5287806Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5287940Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5288218Z 
[rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5288364Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5288808Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 3. CUDA driver allocated memory was 2250244096 and is now 3208642560. 2025-12-04T12:08:44.5288924Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5289118Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5289437Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5289551Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5289764Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5289927Z [rank3]:E1204 12:08:10.271000 431957 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:08:44.5289969Z dist init r=3, world=4 2025-12-04T12:08:44.5290313Z [rank0]:[W1204 12:08:10.245502225 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:08:44.5290376Z FAILED [12.0271s] [100%] 2025-12-04T12:08:44.5290379Z 2025-12-04T12:08:44.5290438Z =================================== FAILURES =================================== 2025-12-04T12:08:44.5290526Z ___________________ TestClipGradNormCUDA.test_non_root_cuda ____________________ 2025-12-04T12:08:44.5290575Z Traceback (most recent call last): 2025-12-04T12:08:44.5290773Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:08:44.5290821Z self._join_processes(fn) 2025-12-04T12:08:44.5290991Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:08:44.5291048Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:08:44.5291225Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:08:44.5291273Z raise RuntimeError(error) 2025-12-04T12:08:44.5291352Z RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:08:44.5291401Z Traceback (most recent call last): 2025-12-04T12:08:44.5291561Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5291604Z getattr(self, test_name)() 2025-12-04T12:08:44.5291763Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5291800Z fn() 2025-12-04T12:08:44.5291951Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5291995Z method(*args, **kwargs) 2025-12-04T12:08:44.5292143Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5292188Z method(*args, **kwargs) 2025-12-04T12:08:44.5292336Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5292374Z with policy(): 2025-12-04T12:08:44.5292523Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5292566Z raise RuntimeError(msg) 2025-12-04T12:08:44.5292900Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 0. CUDA driver allocated memory was 2459959296 and is now 3418357760. 2025-12-04T12:08:44.5292902Z 2025-12-04T12:08:44.5292980Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5293174Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5293178Z 2025-12-04T12:08:44.5293265Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5293268Z 2025-12-04T12:08:44.5293269Z 2025-12-04T12:08:44.5293350Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:08:44.5293438Z Process 0 terminated with exit code 10, terminating remaining processes. 
2025-12-04T12:08:44.5293699Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-b8d1dd4a4d36a04a.xml - 2025-12-04T12:08:44.5293760Z =========================== short test summary info ============================ 2025-12-04T12:08:44.5293988Z FAILED [12.0271s] distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_non_root_cuda - RuntimeError: Process 0 exited with error code 10 and exception: 2025-12-04T12:08:44.5294036Z Traceback (most recent call last): 2025-12-04T12:08:44.5294216Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5294261Z getattr(self, test_name)() 2025-12-04T12:08:44.5294420Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5294458Z fn() 2025-12-04T12:08:44.5294611Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5294655Z method(*args, **kwargs) 2025-12-04T12:08:44.5294805Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5294847Z method(*args, **kwargs) 2025-12-04T12:08:44.5294996Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5295033Z with policy(): 2025-12-04T12:08:44.5295185Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5295229Z raise RuntimeError(msg) 2025-12-04T12:08:44.5295530Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 0. CUDA driver allocated memory was 2459959296 and is now 3418357760. 2025-12-04T12:08:44.5295534Z 2025-12-04T12:08:44.5295610Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5295803Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5295809Z 2025-12-04T12:08:44.5295896Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5295962Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 2025-12-04T12:08:44.5296025Z ======================= 1 failed, 3 deselected in 12.05s ======================= 2025-12-04T12:08:44.5296066Z Got exit code 1 2025-12-04T12:08:44.5296106Z Retrying single test... 
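[editor's note] Before this attempt ended, rank 0 also logged the ProcessGroupNCCL warning that destroy_process_group() was never called before program exit. A hedged teardown sketch for the pattern that warning asks for (the barrier is optional and purely illustrative):

    import torch.distributed as dist

    def teardown() -> None:
        if dist.is_initialized():
            dist.barrier()                 # optional: drain in-flight collectives
            dist.destroy_process_group()   # release communicator resources at exit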
2025-12-04T12:08:44.5296320Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-3e004d750aa6d482.xml 2025-12-04T12:08:44.5296399Z ============================= test session starts ============================== 2025-12-04T12:08:44.5296513Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:08:44.5296553Z cachedir: .pytest_cache 2025-12-04T12:08:44.5296710Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:08:44.5296757Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:08:44.5296798Z configfile: pytest.ini 2025-12-04T12:08:44.5296957Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:08:44.5297029Z collecting ... collected 4 items / 3 deselected / 1 selected 2025-12-04T12:08:44.5297215Z stepcurrent: skipping 3 already run items. Running only test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_non_root_cuda 2025-12-04T12:08:44.5297259Z Running 1 items in this shard 2025-12-04T12:08:44.5297262Z 2025-12-04T12:08:44.5297532Z distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_non_root_cuda I1204 12:08:14.989000 432287 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 432356 2025-12-04T12:08:44.5297700Z I1204 12:08:14.989000 432287 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 432357 2025-12-04T12:08:44.5297855Z I1204 12:08:14.990000 432287 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 432358 2025-12-04T12:08:44.5298017Z I1204 12:08:14.990000 432287 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 432359 2025-12-04T12:08:44.5298158Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5298317Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5298606Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5298760Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5299043Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5299167Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5299445Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5299592Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 
2025-12-04T12:08:44.5299866Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5300014Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5300288Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5300446Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5300757Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5300905Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5301333Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 3. CUDA driver allocated memory was 2250244096 and is now 3208642560. 2025-12-04T12:08:44.5301449Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5301645Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5301963Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5302096Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5302323Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5302489Z [rank3]:E1204 12:08:24.784000 432359 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:08:44.5302529Z dist init r=3, world=4 2025-12-04T12:08:44.5302668Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5302827Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5303113Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5303268Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5303548Z [rank1]:E1204 
12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5303671Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5303945Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5304093Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5304367Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5304514Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5304815Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5304950Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5305226Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5305373Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5305799Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 1. CUDA driver allocated memory was 2317352960 and is now 3275751424. 
2025-12-04T12:08:44.5305913Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5306104Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5306431Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5306553Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5306763Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5306927Z [rank1]:E1204 12:08:24.869000 432357 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:08:44.5306967Z dist init r=1, world=4 2025-12-04T12:08:44.5307103Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5307262Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5307547Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5307697Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5307983Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5308105Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5308380Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5308526Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5308803Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5308971Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5309247Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5309381Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5309658Z 
[rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5309806Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5310229Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 2. CUDA driver allocated memory was 2300575744 and is now 3258974208. 2025-12-04T12:08:44.5310356Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5310550Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5310912Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5311026Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5311235Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5311402Z [rank2]:E1204 12:08:24.869000 432358 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:08:44.5311441Z dist init r=2, world=4 2025-12-04T12:08:44.5311581Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5311738Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5312024Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5312177Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5312457Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5312583Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5312858Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5313004Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5313310Z [rank0]:E1204 12:08:24.914000 432356 
site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5313455Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5313734Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5313869Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5314146Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5314291Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5314718Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 0. CUDA driver allocated memory was 2459959296 and is now 3418357760. 2025-12-04T12:08:44.5314858Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5315054Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5315371Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5315482Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5315694Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5315860Z [rank0]:E1204 12:08:24.914000 432356 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:08:44.5315898Z dist init r=0, world=4 2025-12-04T12:08:44.5316230Z [rank0]:[W1204 12:08:25.036714309 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:08:44.5316276Z FAILED [11.7254s] [100%] 2025-12-04T12:08:44.5316278Z 2025-12-04T12:08:44.5316332Z =================================== FAILURES =================================== 2025-12-04T12:08:44.5316420Z ___________________ TestClipGradNormCUDA.test_non_root_cuda ____________________ 2025-12-04T12:08:44.5316468Z Traceback (most recent call last): 2025-12-04T12:08:44.5316630Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:08:44.5316676Z self._join_processes(fn) 2025-12-04T12:08:44.5316846Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:08:44.5316900Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:08:44.5317075Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:08:44.5317150Z raise RuntimeError(error) 2025-12-04T12:08:44.5317230Z RuntimeError: Process 3 exited with error code 10 and exception: 2025-12-04T12:08:44.5317276Z Traceback (most recent call last): 2025-12-04T12:08:44.5317435Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5317478Z getattr(self, test_name)() 2025-12-04T12:08:44.5317634Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5317670Z fn() 2025-12-04T12:08:44.5317819Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5317862Z method(*args, **kwargs) 2025-12-04T12:08:44.5318012Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5318053Z method(*args, **kwargs) 2025-12-04T12:08:44.5318201Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5318240Z with policy(): 2025-12-04T12:08:44.5318388Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5318440Z raise RuntimeError(msg) 2025-12-04T12:08:44.5318738Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 3. CUDA driver allocated memory was 2250244096 and is now 3208642560. 2025-12-04T12:08:44.5318755Z 2025-12-04T12:08:44.5318831Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5319025Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5319029Z 2025-12-04T12:08:44.5319116Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5319118Z 2025-12-04T12:08:44.5319120Z 2025-12-04T12:08:44.5319196Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:08:44.5319283Z Process 3 terminated with exit code 10, terminating remaining processes. 
2025-12-04T12:08:44.5319540Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-3e004d750aa6d482.xml - 2025-12-04T12:08:44.5319601Z =========================== short test summary info ============================ 2025-12-04T12:08:44.5319812Z FAILED [11.7254s] distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_non_root_cuda - RuntimeError: Process 3 exited with error code 10 and exception: 2025-12-04T12:08:44.5319858Z Traceback (most recent call last): 2025-12-04T12:08:44.5320025Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5320067Z getattr(self, test_name)() 2025-12-04T12:08:44.5320228Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5320263Z fn() 2025-12-04T12:08:44.5320413Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5320456Z method(*args, **kwargs) 2025-12-04T12:08:44.5320642Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5320682Z method(*args, **kwargs) 2025-12-04T12:08:44.5320831Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5320869Z with policy(): 2025-12-04T12:08:44.5321053Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5321096Z raise RuntimeError(msg) 2025-12-04T12:08:44.5321396Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 3. CUDA driver allocated memory was 2250244096 and is now 3208642560. 2025-12-04T12:08:44.5321400Z 2025-12-04T12:08:44.5321475Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5321665Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5321668Z 2025-12-04T12:08:44.5321757Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5321823Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 2025-12-04T12:08:44.5321884Z ======================= 1 failed, 3 deselected in 11.73s ======================= 2025-12-04T12:08:44.5321919Z Got exit code 1 2025-12-04T12:08:44.5321960Z Retrying single test... 
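Both failed runs also end with the ProcessGroupNCCL warning that destroy_process_group() was not called before program exit. For reference, here is a minimal, hypothetical standalone script showing the teardown pattern the warning (and the shutdown documentation it links) asks for; it assumes launch via torchrun so init_process_group can read rank and world size from the environment, and it is not the test harness's own code.

import torch
import torch.distributed as dist

def main() -> None:
    # NCCL backend; on ROCm builds this is backed by RCCL.
    dist.init_process_group(backend="nccl")
    try:
        device = dist.get_rank() % torch.cuda.device_count()
        t = torch.ones(1, device=f"cuda:{device}")
        dist.all_reduce(t)  # ... collective work goes here ...
    finally:
        # Explicit teardown avoids the "destroy_process_group() was not called
        # before program exit" resource-leak warning seen in the log above.
        dist.destroy_process_group()

if __name__ == "__main__":
    main()

The run that follows is the harness's final attempt at the single test before it is declared FAILED CONSISTENTLY further down and the shard continues under continue-through-error.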
2025-12-04T12:08:44.5322168Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-ce48c1e6fdab5aa9.xml 2025-12-04T12:08:44.5322242Z ============================= test session starts ============================== 2025-12-04T12:08:44.5322374Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:08:44.5322414Z cachedir: .pytest_cache 2025-12-04T12:08:44.5322571Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:08:44.5322616Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:08:44.5322656Z configfile: pytest.ini 2025-12-04T12:08:44.5322816Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:08:44.5322888Z collecting ... collected 4 items / 3 deselected / 1 selected 2025-12-04T12:08:44.5323074Z stepcurrent: skipping 3 already run items. Running only test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_non_root_cuda 2025-12-04T12:08:44.5323119Z Running 1 items in this shard 2025-12-04T12:08:44.5323121Z 2025-12-04T12:08:44.5323389Z distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_non_root_cuda I1204 12:08:29.357000 432689 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 0 with pid 432758 2025-12-04T12:08:44.5323545Z I1204 12:08:29.358000 432689 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 1 with pid 432759 2025-12-04T12:08:44.5323697Z I1204 12:08:29.359000 432689 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 2 with pid 432760 2025-12-04T12:08:44.5323846Z I1204 12:08:29.359000 432689 site-packages/torch/testing/_internal/common_distributed.py:849] Started process 3 with pid 432761 2025-12-04T12:08:44.5323985Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5324147Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5324434Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5324587Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5324896Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5325019Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5325295Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5325443Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 
2025-12-04T12:08:44.5325719Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5325865Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5326138Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5326282Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5326569Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5326716Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5327143Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 3. CUDA driver allocated memory was 2250244096 and is now 3208642560. 2025-12-04T12:08:44.5327259Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5327453Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5327769Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5327881Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5328090Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5328252Z [rank3]:E1204 12:08:39.190000 432761 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 3 with exit code: 10 2025-12-04T12:08:44.5328291Z dist init r=3, world=4 2025-12-04T12:08:44.5328427Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5328585Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5328869Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5329042Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5329326Z [rank1]:E1204 
12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5329449Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5329724Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5329870Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5330144Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5330289Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5330572Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5330768Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5331043Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5331191Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5331620Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 1. CUDA driver allocated memory was 2317352960 and is now 3275751424. 
2025-12-04T12:08:44.5331733Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5331927Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5332244Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5332354Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5332563Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5332723Z [rank1]:E1204 12:08:39.261000 432759 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 1 with exit code: 10 2025-12-04T12:08:44.5332763Z dist init r=1, world=4 2025-12-04T12:08:44.5332899Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5333056Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5333373Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5333527Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5333812Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5333935Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5334211Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5334356Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5334628Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5334786Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5335074Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5335209Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5335484Z 
[rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5335630Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5336054Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 2. CUDA driver allocated memory was 2300575744 and is now 3258974208. 2025-12-04T12:08:44.5336168Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5336361Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5336676Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5336789Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5336998Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5337159Z [rank2]:E1204 12:08:39.317000 432760 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 2 with exit code: 10 2025-12-04T12:08:44.5337196Z dist init r=2, world=4 2025-12-04T12:08:44.5337356Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] Caught exception: 2025-12-04T12:08:44.5337512Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] Traceback (most recent call last): 2025-12-04T12:08:44.5337797Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5337948Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] getattr(self, test_name)() 2025-12-04T12:08:44.5338234Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5338358Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] fn() 2025-12-04T12:08:44.5338629Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5338788Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5339060Z [rank0]:E1204 12:08:39.326000 432758 
site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5339218Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] method(*args, **kwargs) 2025-12-04T12:08:44.5339490Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5339624Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] with policy(): 2025-12-04T12:08:44.5339898Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5340045Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] raise RuntimeError(msg) 2025-12-04T12:08:44.5340471Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 0. CUDA driver allocated memory was 2459959296 and is now 3418357760. 2025-12-04T12:08:44.5340584Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5340820Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5341137Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5341250Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] 2025-12-04T12:08:44.5341460Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5341653Z [rank0]:E1204 12:08:39.326000 432758 site-packages/torch/testing/_internal/common_distributed.py:935] exiting process 0 with exit code: 10 2025-12-04T12:08:44.5341691Z dist init r=0, world=4 2025-12-04T12:08:44.5342022Z [rank0]:[W1204 12:08:39.545610514 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) 2025-12-04T12:08:44.5342064Z FAILED [11.8259s] [100%] 2025-12-04T12:08:44.5342068Z 2025-12-04T12:08:44.5342121Z =================================== FAILURES =================================== 2025-12-04T12:08:44.5342209Z ___________________ TestClipGradNormCUDA.test_non_root_cuda ____________________ 2025-12-04T12:08:44.5342253Z Traceback (most recent call last): 2025-12-04T12:08:44.5342415Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 770, in wrapper 2025-12-04T12:08:44.5342460Z self._join_processes(fn) 2025-12-04T12:08:44.5342630Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1039, in _join_processes 2025-12-04T12:08:44.5342683Z self._check_return_codes(fn, elapsed_time) 2025-12-04T12:08:44.5342874Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1079, in _check_return_codes 2025-12-04T12:08:44.5342916Z raise RuntimeError(error) 2025-12-04T12:08:44.5343008Z RuntimeError: Process 3 exited with error code 10 and exception: 2025-12-04T12:08:44.5343053Z Traceback (most recent call last): 2025-12-04T12:08:44.5343211Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5343254Z getattr(self, test_name)() 2025-12-04T12:08:44.5343411Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5343446Z fn() 2025-12-04T12:08:44.5343595Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5343635Z method(*args, **kwargs) 2025-12-04T12:08:44.5343784Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5343824Z method(*args, **kwargs) 2025-12-04T12:08:44.5343972Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5344009Z with policy(): 2025-12-04T12:08:44.5344159Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5344200Z raise RuntimeError(msg) 2025-12-04T12:08:44.5344501Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 3. CUDA driver allocated memory was 2250244096 and is now 3208642560. 2025-12-04T12:08:44.5344504Z 2025-12-04T12:08:44.5344579Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5344772Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5344776Z 2025-12-04T12:08:44.5344863Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5344865Z 2025-12-04T12:08:44.5344867Z 2025-12-04T12:08:44.5344942Z ----------------------------- Captured stdout call ----------------------------- 2025-12-04T12:08:44.5345027Z Process 3 terminated with exit code 10, terminating remaining processes. 
2025-12-04T12:08:44.5345309Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-ce48c1e6fdab5aa9.xml - 2025-12-04T12:08:44.5345369Z =========================== short test summary info ============================ 2025-12-04T12:08:44.5345579Z FAILED [11.8259s] distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_non_root_cuda - RuntimeError: Process 3 exited with error code 10 and exception: 2025-12-04T12:08:44.5345624Z Traceback (most recent call last): 2025-12-04T12:08:44.5345787Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 925, in run_test 2025-12-04T12:08:44.5345830Z getattr(self, test_name)() 2025-12-04T12:08:44.5345987Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 772, in wrapper 2025-12-04T12:08:44.5346021Z fn() 2025-12-04T12:08:44.5346172Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5346213Z method(*args, **kwargs) 2025-12-04T12:08:44.5346362Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3329, in wrapper 2025-12-04T12:08:44.5346402Z method(*args, **kwargs) 2025-12-04T12:08:44.5346560Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3328, in wrapper 2025-12-04T12:08:44.5346596Z with policy(): 2025-12-04T12:08:44.5346747Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2705, in __exit__ 2025-12-04T12:08:44.5346808Z raise RuntimeError(msg) 2025-12-04T12:08:44.5347108Z RuntimeError: CUDA driver API confirmed a leak in __mp_main__.TestClipGradNormCUDA.test_non_root_cuda! Caching allocator allocated memory was 512 and is now reported as 2560 on device 3. CUDA driver allocated memory was 2250244096 and is now 3208642560. 2025-12-04T12:08:44.5347111Z 2025-12-04T12:08:44.5347187Z To execute this test, run the following from the base repo dir: 2025-12-04T12:08:44.5347378Z PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/distributed/fsdp/test_fsdp_clip_grad_norm.py TestClipGradNormCUDA.test_non_root_cuda 2025-12-04T12:08:44.5347381Z 2025-12-04T12:08:44.5347468Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2025-12-04T12:08:44.5347530Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
2025-12-04T12:08:44.5347594Z ======================= 1 failed, 3 deselected in 11.84s ======================= 2025-12-04T12:08:44.5347630Z Got exit code 1 2025-12-04T12:08:44.5347777Z FAILED CONSISTENTLY: test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_non_root_cuda 2025-12-04T12:08:44.5347903Z Test failed consistently, continuing with the rest of the tests due to continue-through-error being set 2025-12-04T12:08:44.5348118Z Test results will be stored in test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-44571bc32f89dd19.xml 2025-12-04T12:08:44.5348176Z ============================= test session starts ============================== 2025-12-04T12:08:44.5348288Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.10/bin/python 2025-12-04T12:08:44.5348330Z cachedir: .pytest_cache 2025-12-04T12:08:44.5348484Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2025-12-04T12:08:44.5348533Z rootdir: /var/lib/jenkins/pytorch 2025-12-04T12:08:44.5348572Z configfile: pytest.ini 2025-12-04T12:08:44.5348733Z plugins: hypothesis-6.56.4, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, subtests-0.13.1, xdist-3.3.1, xdoctest-1.3.0, typeguard-4.3.0 2025-12-04T12:08:44.5348802Z collecting ... collected 4 items / 4 deselected / 0 selected 2025-12-04T12:08:44.5348855Z stepcurrent: skipping 4 already run items. 2025-12-04T12:08:44.5348923Z Running 0 items in this shard 2025-12-04T12:08:44.5348925Z 2025-12-04T12:08:44.5349181Z - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_clip_grad_norm/distributed.fsdp.test_fsdp_clip_grad_norm-44571bc32f89dd19.xml - 2025-12-04T12:08:44.5349241Z ============================ 4 deselected in 0.00s ============================= 2025-12-04T12:08:44.5349807Z The following tests failed consistently: ['test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_ddp_parity_cuda', 'test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_low_precision_grads_cuda', 'test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_no_gradients_cuda', 'test/distributed/fsdp/test_fsdp_clip_grad_norm.py::TestClipGradNormCUDA::test_non_root_cuda'] 2025-12-04T12:08:44.5349811Z 2025-12-04T12:08:44.5350014Z FINISHED PRINTING LOG FILE of distributed/fsdp/test_fsdp_clip_grad_norm 1/1 (test/test-reports/distributed.fsdp.test_fsdp_clip_grad_norm_1.1_2ac95aece383090e_.log) 2025-12-04T12:08:44.5350016Z 2025-12-04T12:08:44.5350148Z Finished distributed/fsdp/test_fsdp_clip_grad_norm 1/1 ... [2025-12-04 12:08:44.440425][4975153.290356344], took 3.08min 2025-12-04T12:08:44.5350413Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T12:08:44.5350511Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T12:08:44.5350671Z GITHUB_RUN_ID, GITHUB_RUN_ATTEMPT, or ARTIFACTS_FILE_SUFFIX not set, not uploading 2025-12-04T12:08:44.5350719Z Uploading artifacts took 0.00 seconds 2025-12-04T12:08:44.5350787Z distributed/fsdp/test_fsdp_clip_grad_norm 1/1 failed! 2025-12-04T12:08:44.5350892Z Running distributed/tensor/test_utils 1/1 ... 
[2025-12-04 12:08:44.448209][4975153.298143575] 2025-12-04T12:08:44.5350939Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T12:08:44.5351246Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/tensor/test_utils.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 12:08:44.448674] 2025-12-04T12:10:00.3597680Z 2025-12-04T12:10:00.3599008Z distributed/tensor/test_utils 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.tensor.test_utils_1.1_7f02c58e6610a1bd_.log 2025-12-04T12:10:00.3612080Z Running 24 items in this shard: test/distributed/tensor/test_utils.py::LocalTest::test_compute_local_shape_and_global_offset_uneven, test/distributed/tensor/test_utils.py::UtilTest::test_compute_global_tensor_shape_1D, test/distributed/tensor/test_utils.py::UtilTest::test_compute_global_tensor_shape_1D_invalid_shape, test/distributed/tensor/test_utils.py::UtilTest::test_compute_global_tensor_shape_failure_2D, test/distributed/tensor/test_utils.py::UtilTest::test_compute_local_shape_and_global_offset_1D, test/distributed/tensor/test_utils.py::UtilTest::test_compute_local_shape_and_global_offset_2D, test/distributed/tensor/test_utils.py::UtilTest::test_compute_local_shape_and_global_offset_3D, test/distributed/tensor/test_utils.py::UtilTest::test_compute_local_shape_and_global_offset_4D, test/distributed/tensor/test_utils.py::UtilTest::test_fsdp_tp_meta_compute, test/distributed/tensor/test_utils.py::UtilTest::test_hsdp_tp_meta_compute, test/distributed/tensor/test_utils.py::UtilTest::test_uneven_fsdp_tp_meta_compute, test/distributed/tensor/test_utils.py::UtilSingleDeviceTest::test_compute_global_tensor_info_non_shard_placements, test/distributed/tensor/test_utils.py::UtilSingleDeviceTest::test_compute_global_tensor_info_shard_placement, test/distributed/tensor/test_utils.py::UtilSingleDeviceTest::test_compute_global_tensor_info_unsupported_placement, test/distributed/tensor/test_utils.py::UtilSingleDeviceTest::test_compute_tensor_info, test/distributed/tensor/test_utils.py::TestStridedSharding::test_1d_mesh_strided_sharding, test/distributed/tensor/test_utils.py::TestStridedSharding::test_2d_mesh_2d_tensor_strided_sharding, test/distributed/tensor/test_utils.py::TestStridedSharding::test_2d_mesh_strided_sharding, test/distributed/tensor/test_utils.py::TestStridedSharding::test_2d_mesh_uneven_strided_shard, test/distributed/tensor/test_utils.py::Test_StridedShard_with_shard_order::test_StridedShard_not_convertible_to_shard_order, test/distributed/tensor/test_utils.py::Test_StridedShard_with_shard_order::test_StridedShard_to_shard_order, test/distributed/tensor/test_utils.py::Test2DStridedLocalShard::test_fsdp1_tp_2d_dtensor_local_shards_and_offsets, test/distributed/tensor/test_utils.py::Test2DStridedLocalShard::test_fsdp2_tp_2d_dtensor_local_shards_and_offsets, test/distributed/tensor/test_utils.py::TestExplicitRedistribute::test_explicit_matmul 2025-12-04T12:10:00.3623046Z 2025-12-04T12:10:00.3623461Z Finished distributed/tensor/test_utils 1/1 ... 
[2025-12-04 12:10:00.359729][4975229.209659196], took 1.27min 2025-12-04T12:10:00.3645579Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T12:10:00.3673797Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T12:10:00.3680090Z Running distributed/test_data_parallel 1/1 ... [2025-12-04 12:10:00.367814][4975229.217746904] 2025-12-04T12:10:00.3680829Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T12:10:00.3685329Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_data_parallel.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 12:10:00.368307] 2025-12-04T12:10:30.1890878Z 2025-12-04T12:10:30.1892428Z distributed/test_data_parallel 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_data_parallel_1.1_1e6ff4ab1c031f82_.log 2025-12-04T12:10:30.1915101Z Running 46 items in this shard: test/distributed/test_data_parallel.py::TestDataParallel::test_autocast, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_buffers_requiring_grad, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_complex, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_device_args, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_function_deletion, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_lazy_linear, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_model_device, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_model_no_refcycles, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_module_zero_inputs, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_multiple_input, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_nested_input, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_nested_output, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_no_grad, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_rnn, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_small_back, test/distributed/test_data_parallel.py::TestDataParallel::test_data_parallel_sparse, test/distributed/test_data_parallel.py::TestDataParallel::test_gather_cpu, test/distributed/test_data_parallel.py::TestDataParallel::test_gather_different_len_dicts, test/distributed/test_data_parallel.py::TestDataParallel::test_gather_gpu, test/distributed/test_data_parallel.py::TestDataParallel::test_parallel_apply, test/distributed/test_data_parallel.py::TestDataParallel::test_parallel_apply_autocast, test/distributed/test_data_parallel.py::TestDataParallel::test_parallel_apply_passes_exception, test/distributed/test_data_parallel.py::TestDataParallel::test_parameter_list_dict_replica, test/distributed/test_data_parallel.py::TestDataParallel::test_replicate, test/distributed/test_data_parallel.py::TestDataParallel::test_replicate_buffers, test/distributed/test_data_parallel.py::TestDataParallel::test_save_replica_module, 
test/distributed/test_data_parallel.py::TestDataParallel::test_scatter_cpu, test/distributed/test_data_parallel.py::TestDataParallel::test_scatter_gpu, test/distributed/test_data_parallel.py::TestDataParallel::test_strided_grad_layout, test/distributed/test_data_parallel.py::TestDataParallel::test_zero_grad, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_cuda_float16, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_cuda_float32, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_cuda_float64, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_kwargs_only_cuda_float16, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_kwargs_only_cuda_float32, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_kwargs_only_cuda_float64, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_kwargs_only_empty_dict_cuda_float16, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_kwargs_only_empty_dict_cuda_float32, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_kwargs_only_empty_dict_cuda_float64, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_kwargs_only_empty_list_cuda_float16, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_kwargs_only_empty_list_cuda_float32, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_kwargs_only_empty_list_cuda_float64, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_kwargs_only_empty_tuple_cuda_float16, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_kwargs_only_empty_tuple_cuda_float32, test/distributed/test_data_parallel.py::TestDataParallelDeviceTypeCUDA::test_data_parallel_module_kwargs_only_empty_tuple_cuda_float64
2025-12-04T12:10:30.1936142Z 
2025-12-04T12:10:30.1936563Z Finished distributed/test_data_parallel 1/1 ... [2025-12-04 12:10:30.188654][4975259.038584157], took 0.50min
2025-12-04T12:10:30.1937992Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T12:10:30.1963030Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T12:10:30.1969436Z Running distributed/_composable/fsdp/test_fully_shard_memory 1/1 ... [2025-12-04 12:10:30.196731][4975259.046663755]
2025-12-04T12:10:30.1970215Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T12:10:30.1974566Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_composable/fsdp/test_fully_shard_memory.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 12:10:30.197222]
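The recurring "Failed to parse and upload json test reports: Unable to locate credentials" line is botocore's standard NoCredentialsError message, so the report step is evidently attempting an AWS upload without credentials in the environment; only the upload fails, and the run continues. A hedged sketch of that failure mode (the bucket name, key layout, and helper are invented for illustration, not taken from this repo):

    import boto3
    from botocore.exceptions import NoCredentialsError

    def upload_report(xml_path: str) -> None:
        try:
            # Hypothetical destination; the real bucket/key are not in this log.
            boto3.client("s3").upload_file(
                xml_path, "example-test-stats", f"reports/{xml_path}")
        except NoCredentialsError as err:
            # str(err) is "Unable to locate credentials", as logged above.
            print(f"Failed to parse and upload json test reports: {err}")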
2025-12-04T12:10:52.9321878Z 
2025-12-04T12:10:52.9323359Z distributed/_composable/fsdp/test_fully_shard_memory 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._composable.fsdp.test_fully_shard_memory_1.1_d4a8f70e08cca707_.log
2025-12-04T12:10:52.9326341Z Running 2 items in this shard: test/distributed/_composable/fsdp/test_fully_shard_memory.py::TestFullyShardMemory::test_fully_shard_del_memory, test/distributed/_composable/fsdp/test_fully_shard_memory.py::TestFullyShardMemory::test_fully_shard_training_memory
2025-12-04T12:10:52.9327622Z 
2025-12-04T12:10:52.9328178Z Finished distributed/_composable/fsdp/test_fully_shard_memory 1/1 ... [2025-12-04 12:10:52.931739][4975281.781671637], took 0.38min
2025-12-04T12:10:52.9367163Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T12:10:52.9392910Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T12:10:52.9399117Z Running distributed/optim/test_zero_redundancy_optimizer 1/1 ... [2025-12-04 12:10:52.939657][4975281.789591078]
2025-12-04T12:10:52.9399902Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T12:10:52.9403725Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/optim/test_zero_redundancy_optimizer.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 12:10:52.940136]
2025-12-04T12:17:00.5304712Z 
2025-12-04T12:17:00.5306675Z distributed/optim/test_zero_redundancy_optimizer 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.optim.test_zero_redundancy_optimizer_1.1_f3047518ad2f532e_.log
2025-12-04T12:17:00.5341409Z Running 42 items in this shard: test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerSingleRank::test_constructor, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerSingleRank::test_lr_scheduler, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerSingleRank::test_same_dense_param_type, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerSingleRank::test_state_dict, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerSingleRank::test_step_with_extra_inner_key, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerSingleRank::test_step_with_kwargs, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerSingleRank::test_step_without_closure, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerSingleRank::test_zero_grad, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_add_param_group, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_collect_shards, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_False_gradient_as_bucket_view_False_static_graph_False_shard_buckets_False, 
test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_False_gradient_as_bucket_view_False_static_graph_False_shard_buckets_True, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_False_gradient_as_bucket_view_False_static_graph_True_shard_buckets_False, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_False_gradient_as_bucket_view_False_static_graph_True_shard_buckets_True, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_False_gradient_as_bucket_view_True_static_graph_False_shard_buckets_False, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_False_gradient_as_bucket_view_True_static_graph_False_shard_buckets_True, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_False_gradient_as_bucket_view_True_static_graph_True_shard_buckets_False, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_False_gradient_as_bucket_view_True_static_graph_True_shard_buckets_True, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_True_gradient_as_bucket_view_False_static_graph_False_shard_buckets_False, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_True_gradient_as_bucket_view_False_static_graph_False_shard_buckets_True, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_True_gradient_as_bucket_view_False_static_graph_True_shard_buckets_False, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_True_gradient_as_bucket_view_False_static_graph_True_shard_buckets_True, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_True_gradient_as_bucket_view_True_static_graph_False_shard_buckets_False, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_True_gradient_as_bucket_view_True_static_graph_False_shard_buckets_True, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_True_gradient_as_bucket_view_True_static_graph_True_shard_buckets_False, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_ddp_zero_overlap_use_gpu_True_use_interleaved_hook_True_gradient_as_bucket_view_True_static_graph_True_shard_buckets_True, 
test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_local_optimizer_parity_optimizer_class_str_AdamW_maximize_False, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_local_optimizer_parity_optimizer_class_str_AdamW_maximize_True, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_local_optimizer_parity_optimizer_class_str_Adam_maximize_False, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_local_optimizer_parity_optimizer_class_str_Adam_maximize_True, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_local_optimizer_parity_optimizer_class_str_SGD_maximize_False, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_local_optimizer_parity_optimizer_class_str_SGD_maximize_True, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_lr_scheduler, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_multiple_param_groups, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_nondefault_process_group, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_sharding, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_step, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_step_with_closure, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_zero_join_cpu, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_zero_join_gpu, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_zero_model_parallel_parameters_as_bucket_view_False, test/distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_zero_model_parallel_parameters_as_bucket_view_True 2025-12-04T12:17:00.5374684Z 2025-12-04T12:17:00.5375209Z Finished distributed/optim/test_zero_redundancy_optimizer 1/1 ... [2025-12-04 12:17:00.530860][4975649.380791162], took 6.13min 2025-12-04T12:17:00.5376713Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T12:17:00.5381878Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T12:17:00.5388628Z Running distributed/test_c10d_spawn_gloo 1/1 ... [2025-12-04 12:17:00.538646][4975649.388579456] 2025-12-04T12:17:00.5389271Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T12:17:00.5393480Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_c10d_spawn_gloo.py', '--shard-id=1', '--num-shards=1', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-12-04 12:17:00.539111]
2025-12-04T12:18:02.5693366Z 
2025-12-04T12:18:02.5694685Z distributed/test_c10d_spawn_gloo 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_c10d_spawn_gloo_1.1_121edd22744fbad5_.log
2025-12-04T12:18:02.5700451Z Running 11 items in this shard: test/distributed/test_c10d_spawn_gloo.py::DistributedDataParallelSingleProcessTest::test_cpu, test/distributed/test_c10d_spawn_gloo.py::DistributedDataParallelSingleProcessTest::test_cuda, test/distributed/test_c10d_spawn_gloo.py::DistributedDataParallelSingleProcessTest::test_rnn, test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_all_gather, test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_all_to_all, test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_all_to_all_single, test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_allreduce, test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_broadcast, test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_gather, test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_reduce, test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_scatter
2025-12-04T12:18:02.5706608Z Running 1 items in this shard: test/distributed/test_c10d_spawn_gloo.py::DistributedDataParallelSingleProcessTest::test_cpu
2025-12-04T12:18:02.5707773Z Running 1 items in this shard: test/distributed/test_c10d_spawn_gloo.py::DistributedDataParallelSingleProcessTest::test_cuda
2025-12-04T12:18:02.5708905Z Running 1 items in this shard: test/distributed/test_c10d_spawn_gloo.py::DistributedDataParallelSingleProcessTest::test_rnn
2025-12-04T12:18:02.5710003Z Running 1 items in this shard: test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_all_gather
2025-12-04T12:18:02.5711141Z Running 1 items in this shard: test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_all_to_all
2025-12-04T12:18:02.5712258Z Running 1 items in this shard: test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_all_to_all_single
2025-12-04T12:18:02.5713363Z Running 1 items in this shard: test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_allreduce
2025-12-04T12:18:02.5714461Z Running 1 items in this shard: test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_broadcast
2025-12-04T12:18:02.5716376Z Running 1 items in this shard: test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_gather
2025-12-04T12:18:02.5717474Z Running 1 items in this shard: test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_reduce
2025-12-04T12:18:02.5718517Z Running 1 items in this shard: test/distributed/test_c10d_spawn_gloo.py::TestDistributedNNFunctionsGloo::test_scatter
2025-12-04T12:18:02.5719111Z 
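Unlike the earlier files, test_c10d_spawn_gloo was invoked with --subprocess, which is why the single "Running 11 items in this shard" summary is followed by eleven one-item lines: each test id is re-run in its own interpreter, so a crash in one collective test cannot take down the whole session. A rough sketch of the pattern (simplified; the real logic lives in PyTorch's test harness, and this loop is only an illustration):

    import subprocess
    import sys

    def run_isolated(node_ids):
        # One child interpreter per pytest node id, mirroring the repeated
        # "Running 1 items in this shard" lines above (sketch only).
        failed = []
        for node_id in node_ids:
            proc = subprocess.run([sys.executable, "-m", "pytest", "-x", node_id])
            if proc.returncode != 0:
                failed.append(node_id)
        return failed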
2025-12-04T12:18:02.5719518Z Finished distributed/test_c10d_spawn_gloo 1/1 ... [2025-12-04 12:18:02.569214][4975711.419145685], took 1.03min
2025-12-04T12:18:02.5741943Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T12:18:02.5768121Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T12:18:02.5774651Z Running distributed/fsdp/test_distributed_checkpoint 1/1 ... [2025-12-04 12:18:02.577254][4975711.427185446]
2025-12-04T12:18:02.5775367Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T12:18:02.5780154Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/fsdp/test_distributed_checkpoint.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 12:18:02.577794]
2025-12-04T12:18:04.9975232Z 
2025-12-04T12:18:04.9976677Z distributed/fsdp/test_distributed_checkpoint 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.fsdp.test_distributed_checkpoint_1.1_269a771c7a438364_.log
2025-12-04T12:18:04.9980307Z Running 2 items in this shard: test/distributed/fsdp/test_distributed_checkpoint.py::TestDistributedCheckpointCUDA::test_distributed_checkpoint_state_dict_type0_cuda, test/distributed/fsdp/test_distributed_checkpoint.py::TestDistributedCheckpointCUDA::test_distributed_checkpoint_state_dict_type1_cuda
2025-12-04T12:18:04.9982060Z 
2025-12-04T12:18:04.9982584Z Finished distributed/fsdp/test_distributed_checkpoint 1/1 ... [2025-12-04 12:18:04.997082][4975713.846993236], took 0.04min
2025-12-04T12:18:05.0021110Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T12:18:05.0046739Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T12:18:05.0052832Z Running distributed/test_c10d_spawn_nccl 1/1 ... [2025-12-04 12:18:05.004982][4975713.854915328]
2025-12-04T12:18:05.0053546Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T12:18:05.0056764Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_c10d_spawn_nccl.py', '--shard-id=1', '--num-shards=1', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 12:18:05.005437]
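Each "Parsing testcases for test report" step reads a pytest junit-style XML like the one named above before attempting the (failing) upload. With the standard library alone, the parse half amounts to something like this sketch (the summary helper is hypothetical, not the repo's actual parser):

    import xml.etree.ElementTree as ET

    def summarize_junit(xml_path):
        # Count <testcase> elements and any with nested <failure>/<error> tags.
        root = ET.parse(xml_path).getroot()
        cases = list(root.iter("testcase"))
        bad = [c for c in cases
               if c.find("failure") is not None or c.find("error") is not None]
        return len(cases), len(bad)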
2025-12-04T12:19:27.8681099Z 
2025-12-04T12:19:27.8682246Z distributed/test_c10d_spawn_nccl 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_c10d_spawn_nccl_1.1_6195555737aef301_.log
2025-12-04T12:19:27.8687428Z Running 10 items in this shard: test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_all_gather, test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_all_gather_base, test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_all_reduce_non_contiguous, test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_all_to_all, test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_all_to_all_single, test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_allreduce, test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_broadcast, test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_reduce, test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_reduce_scatter, test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_reduce_scatter_non_contiguous
2025-12-04T12:19:27.8692756Z Running 1 items in this shard: test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_all_gather
2025-12-04T12:19:27.8693896Z Running 1 items in this shard: test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_all_gather_base
2025-12-04T12:19:27.8695095Z Running 1 items in this shard: test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_all_reduce_non_contiguous
2025-12-04T12:19:27.8696266Z Running 1 items in this shard: test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_all_to_all
2025-12-04T12:19:27.8697388Z Running 1 items in this shard: test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_all_to_all_single
2025-12-04T12:19:27.8698515Z Running 1 items in this shard: test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_allreduce
2025-12-04T12:19:27.8699592Z Running 1 items in this shard: test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_broadcast
2025-12-04T12:19:27.8700689Z Running 1 items in this shard: test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_reduce
2025-12-04T12:19:27.8701890Z Running 1 items in this shard: test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_reduce_scatter
2025-12-04T12:19:27.8703196Z Running 1 items in this shard: test/distributed/test_c10d_spawn_nccl.py::TestDistributedNNFunctionsNccl::test_reduce_scatter_non_contiguous
2025-12-04T12:19:27.8703900Z 
2025-12-04T12:19:27.8704330Z Finished distributed/test_c10d_spawn_nccl 1/1 ... [2025-12-04 12:19:27.867905][4975796.717836709], took 1.38min
2025-12-04T12:19:27.8729709Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T12:19:27.8755946Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T12:19:27.8761377Z Running distributed/fsdp/test_fsdp_use_orig_params 1/1 ... 
[2025-12-04 12:19:27.875858][4975796.725791491] 2025-12-04T12:19:27.8762163Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T12:19:27.8766585Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/fsdp/test_fsdp_use_orig_params.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 12:19:27.876322] 2025-12-04T12:26:07.0796363Z 2025-12-04T12:26:07.0798068Z distributed/fsdp/test_fsdp_use_orig_params 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.fsdp.test_fsdp_use_orig_params_1.1_a92b491321e480e0_.log 2025-12-04T12:26:07.0815514Z Running 25 items in this shard: test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsMultipleParamGroups::test_diff_hyperparams_cpu_offload_sharding_strategy_str_full_shard, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsMultipleParamGroups::test_diff_hyperparams_cpu_offload_sharding_strategy_str_no_shard, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsMultipleParamGroups::test_diff_hyperparams_cpu_offload_sharding_strategy_str_shard_grad_op, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsMultipleParamGroups::test_diff_hyperparams_sharding_strategy_str_full_shard, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsMultipleParamGroups::test_diff_hyperparams_sharding_strategy_str_no_shard, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsMultipleParamGroups::test_diff_hyperparams_sharding_strategy_str_shard_grad_op, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsMultipleParamGroups::test_diff_trainability, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsMultipleParamGroups::test_fsdp_compile, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsMultipleParamGroups::test_multiple_optimizers, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsUnshardReshard::test_multiple_forward_offload_params_False, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsUnshardReshard::test_multiple_forward_offload_params_True, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsUnshardReshard::test_summon_between_two_forwards_offload_params_False, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsUnshardReshard::test_summon_between_two_forwards_offload_params_True, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsParamAccess::test_access_params_after_forward, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsWriteback::test_grad_writeback, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsWriteback::test_no_reshard_and_mixed_precision, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsWriteback::test_param_writeback, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsWriteback::test_writeback_between_fwd_and_bwd_for_no_reshard_raises, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsWriteback::test_writeback_shape_mismatch, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsFQNs::test_named_parameters_in_forward, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsNoSync::test_no_sync_correctness, 
test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsNoSync::test_no_sync_mixed_precision, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestFSDPUseOrigParamsInit::test_non_uniform_requires_grad, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestMultiTensorApply::test_multi_tensor_apply_size0_tensors_cpu, test/distributed/fsdp/test_fsdp_use_orig_params.py::TestMultiTensorApply::test_multi_tensor_apply_size0_tensors_cuda 2025-12-04T12:26:07.0831780Z 2025-12-04T12:26:07.0832260Z Finished distributed/fsdp/test_fsdp_use_orig_params 1/1 ... [2025-12-04 12:26:07.079505][4976195.929432609], took 6.65min 2025-12-04T12:26:07.0848287Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T12:26:07.0877145Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T12:26:07.0884406Z Running distributed/_shard/sharded_tensor/test_sharded_tensor 1/1 ... [2025-12-04 12:26:07.088150][4976195.938078853] 2025-12-04T12:26:07.0885165Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T12:26:07.0889228Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/_shard/sharded_tensor/test_sharded_tensor.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 12:26:07.088678] 2025-12-04T12:31:45.8245607Z 2025-12-04T12:31:45.8247331Z distributed/_shard/sharded_tensor/test_sharded_tensor 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed._shard.sharded_tensor.test_sharded_tensor_1.1_4ee51a25fa529d5a_.log 2025-12-04T12:31:45.8291478Z Running 74 items in this shard: test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorMetadata::test_serialize_and_deserialize, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestCreateTensorFromParams::test_empty, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardParameter::test_shard_parameter, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardParameter::test_shard_parameter_errors, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardTensor::test_shard_tensor, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardTensor::test_shard_tensor_errors, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardTensor::test_shard_tensor_with_empty_shard, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestModuleHookApi::test_collect_local_shard, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestModuleHookApi::test_reshard_output, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestLocalTensor::test_local_tensor, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestLocalTensor::test_local_tensor_error, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_cleanup, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_complete_world_size, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_create_sharded_tensor_like, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_create_sharded_tensor_with_full, 
test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_create_sharded_tensor_with_ones, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_create_sharded_tensor_with_rand, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_create_sharded_tensor_with_zeros, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_gather_even, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_gather_uneven, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_insufficient_sharding_dims, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_invalid_pg_rpc_ranks, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_invalid_sharding, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_load_state_dict_errors, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_multiple_local_shards, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_new_group, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_partial_world_size, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_sharded_tensor_metadata, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_sharded_tensor_sizes, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_sharding_columns, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_state_dict, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_state_dict_new_group, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorChunked::test_state_dict_no_sharded_tensors, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_create_sharded_tensor_with_ones, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_gather_even, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_gather_uneven, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_grid_sharding, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_multiple_local_shards, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_new_group, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_partial_world_size, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_sharded_tensor_device, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_sharded_tensor_metadata, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_sharded_tensor_to_cpu, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_sharded_tensor_to_cuda, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_sharded_tensor_to_test, 
test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_uneven_shards, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorEnumerable::test_with_rpc_names, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalTensor::test_init_from_local_tensor, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalTensor::test_init_from_local_tensor_errors, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_init_from_local_shards, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_init_from_local_shards_and_global_metadata, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_init_from_local_shards_and_global_metadata_invalid_shards, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_init_from_local_shards_and_global_metadata_with_all_zeros, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_init_from_local_shards_and_global_metadata_with_local_view, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_init_from_local_shards_invalid_local_shards, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_init_from_local_shards_invalid_pin_memory, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_init_from_local_shards_invalid_property_cross_ranks, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_init_from_local_shards_invalid_shards_gaps, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_init_from_local_shards_invalid_shards_overlap, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_init_from_local_shards_new_group, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_init_from_local_shards_with_different_glb_size, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_local_shards, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_non_rw_sharded_recalc_for_metadata, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_recalc_for_metadata, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorFromLocalShards::test_st_base_init_from_local_shards_and_global_metadata, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorCustomOps::test_custom_op, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorCustomOps::test_custom_op_errors, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorCustomOps::test_custom_op_override, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardMetadata::test_create_shard_with_no_placement, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardMetadata::test_shard_metadata_init, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorSubGroupInit::test_sub_process_group_placement_validation, 
test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestShardedTensorSubGroupInit::test_sub_process_group_sharded_tensor_init, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestCreateTensorNoProcessGroupMode::test_init_from_local_shards_and_global_metadata, test/distributed/_shard/sharded_tensor/test_sharded_tensor.py::TestCreateTensorNoProcessGroupMode::test_non_contiguous_local_shards
2025-12-04T12:31:45.8333524Z 
2025-12-04T12:31:45.8334057Z Finished distributed/_shard/sharded_tensor/test_sharded_tensor 1/1 ... [2025-12-04 12:31:45.824615][4976534.674551172], took 5.65min
2025-12-04T12:31:45.8335587Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T12:31:45.8336872Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T12:31:45.8337598Z GITHUB_RUN_ID, GITHUB_RUN_ATTEMPT, or ARTIFACTS_FILE_SUFFIX not set, not uploading
2025-12-04T12:31:45.8338197Z Uploading artifacts took 0.00 seconds
2025-12-04T12:31:45.8338797Z Running distributed/test_launcher 1/1 ... [2025-12-04 12:31:45.829897][4976534.679830318]
2025-12-04T12:31:45.8339415Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T12:31:45.8340756Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_launcher.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 12:31:45.830363]
2025-12-04T12:31:49.0015133Z 
2025-12-04T12:31:49.0016587Z distributed/test_launcher 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_launcher_1.1_f2be728605f6b7f6_.log
2025-12-04T12:31:49.0018012Z Running 1 items in this shard: test/distributed/test_launcher.py::TestDistributedLaunch::test_launch_user_script
2025-12-04T12:31:49.0018622Z 
2025-12-04T12:31:49.0019103Z Finished distributed/test_launcher 1/1 ... [2025-12-04 12:31:49.001135][4976537.851066786], took 0.05min
2025-12-04T12:31:49.0061305Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T12:31:49.0086926Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T12:31:49.0092764Z Running distributed/test_store 1/1 ... [2025-12-04 12:31:49.009026][4976537.858960799]
2025-12-04T12:31:49.0093408Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T12:31:49.0097181Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_store.py', '--shard-id=1', '--num-shards=1', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 12:31:49.009472]
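The "GITHUB_RUN_ID, GITHUB_RUN_ATTEMPT, or ARTIFACTS_FILE_SUFFIX not set, not uploading" line shows the artifact step is gated on environment variables that identify the workflow run; when any is missing it returns immediately, hence "Uploading artifacts took 0.00 seconds". The guard presumably amounts to something like this sketch (function name and structure assumed for illustration):

    import os

    REQUIRED = ("GITHUB_RUN_ID", "GITHUB_RUN_ATTEMPT", "ARTIFACTS_FILE_SUFFIX")

    def maybe_upload_artifacts(paths):
        # Skip entirely when not running inside an identifiable CI run.
        if any(os.environ.get(var) is None for var in REQUIRED):
            print("GITHUB_RUN_ID, GITHUB_RUN_ATTEMPT, or ARTIFACTS_FILE_SUFFIX "
                  "not set, not uploading")
            return
        ...  # the real upload would go here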
2025-12-04T12:36:53.1925083Z 
2025-12-04T12:36:53.1926537Z distributed/test_store 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_store_1.1_fa1c95c10806f7d2_.log
2025-12-04T12:36:53.1968464Z Running 126 items in this shard: test/distributed/test_store.py::FileStoreTest::test_append, test/distributed/test_store.py::FileStoreTest::test_clone, test/distributed/test_store.py::FileStoreTest::test_compare_set, test/distributed/test_store.py::FileStoreTest::test_init_pg_and_rpc_with_same_file, test/distributed/test_store.py::FileStoreTest::test_list_keys, test/distributed/test_store.py::FileStoreTest::test_multi_get, test/distributed/test_store.py::FileStoreTest::test_multi_set, test/distributed/test_store.py::FileStoreTest::test_queues, test/distributed/test_store.py::FileStoreTest::test_queues_bidirectional, test/distributed/test_store.py::FileStoreTest::test_queues_nonblocking, test/distributed/test_store.py::FileStoreTest::test_queues_timeout, test/distributed/test_store.py::FileStoreTest::test_refcount, test/distributed/test_store.py::FileStoreTest::test_set_get_check, test/distributed/test_store.py::FileStoreTest::test_simple_wait, test/distributed/test_store.py::HashStoreTest::test_append, test/distributed/test_store.py::HashStoreTest::test_clone, test/distributed/test_store.py::HashStoreTest::test_compare_set, test/distributed/test_store.py::HashStoreTest::test_list_keys, test/distributed/test_store.py::HashStoreTest::test_multi_get, test/distributed/test_store.py::HashStoreTest::test_multi_set, test/distributed/test_store.py::HashStoreTest::test_queues, test/distributed/test_store.py::HashStoreTest::test_queues_bidirectional, test/distributed/test_store.py::HashStoreTest::test_queues_nonblocking, test/distributed/test_store.py::HashStoreTest::test_queues_timeout, test/distributed/test_store.py::HashStoreTest::test_set_get_check, test/distributed/test_store.py::HashStoreTest::test_simple_wait, test/distributed/test_store.py::PrefixStoreTest::test_get_underlying_store, test/distributed/test_store.py::PrefixFileStoreTest::test_append, test/distributed/test_store.py::PrefixFileStoreTest::test_clone, test/distributed/test_store.py::PrefixFileStoreTest::test_compare_set, test/distributed/test_store.py::PrefixFileStoreTest::test_list_keys, test/distributed/test_store.py::PrefixFileStoreTest::test_multi_get, test/distributed/test_store.py::PrefixFileStoreTest::test_multi_set, test/distributed/test_store.py::PrefixFileStoreTest::test_queues, test/distributed/test_store.py::PrefixFileStoreTest::test_queues_bidirectional, test/distributed/test_store.py::PrefixFileStoreTest::test_queues_nonblocking, test/distributed/test_store.py::PrefixFileStoreTest::test_queues_timeout, test/distributed/test_store.py::PrefixFileStoreTest::test_set_get_check, test/distributed/test_store.py::PrefixFileStoreTest::test_simple_wait, test/distributed/test_store.py::TCPStoreTest::test_address_already_in_use, test/distributed/test_store.py::TCPStoreTest::test_agent_store, test/distributed/test_store.py::TCPStoreTest::test_append, test/distributed/test_store.py::TCPStoreTest::test_clone, test/distributed/test_store.py::TCPStoreTest::test_compare_set, test/distributed/test_store.py::TCPStoreTest::test_init_pg_and_rpc_with_same_socket, test/distributed/test_store.py::TCPStoreTest::test_list_keys, test/distributed/test_store.py::TCPStoreTest::test_multi_get, test/distributed/test_store.py::TCPStoreTest::test_multi_set, 
test/distributed/test_store.py::TCPStoreTest::test_multi_worker_with_fixed_world_size, test/distributed/test_store.py::TCPStoreTest::test_multi_worker_with_nonfixed_world_size, test/distributed/test_store.py::TCPStoreTest::test_multitenancy, test/distributed/test_store.py::TCPStoreTest::test_numkeys_delkeys, test/distributed/test_store.py::TCPStoreTest::test_queues, test/distributed/test_store.py::TCPStoreTest::test_queues_bidirectional, test/distributed/test_store.py::TCPStoreTest::test_queues_nonblocking, test/distributed/test_store.py::TCPStoreTest::test_queues_timeout, test/distributed/test_store.py::TCPStoreTest::test_repr, test/distributed/test_store.py::TCPStoreTest::test_set_get_check, test/distributed/test_store.py::TCPStoreTest::test_simple_wait, test/distributed/test_store.py::TCPStoreTest::test_store_timeout_on_missing_clients, test/distributed/test_store.py::TCPStoreTest::test_take_over_listen_socket, test/distributed/test_store.py::TCPStoreTest::test_world_size_0_raises, test/distributed/test_store.py::LibUvTCPStoreTest::test_address_already_in_use, test/distributed/test_store.py::LibUvTCPStoreTest::test_agent_store, test/distributed/test_store.py::LibUvTCPStoreTest::test_append, test/distributed/test_store.py::LibUvTCPStoreTest::test_clone, test/distributed/test_store.py::LibUvTCPStoreTest::test_compare_set, test/distributed/test_store.py::LibUvTCPStoreTest::test_init_pg_and_rpc_with_same_socket, test/distributed/test_store.py::LibUvTCPStoreTest::test_list_keys, test/distributed/test_store.py::LibUvTCPStoreTest::test_multi_get, test/distributed/test_store.py::LibUvTCPStoreTest::test_multi_set, test/distributed/test_store.py::LibUvTCPStoreTest::test_multi_worker_with_fixed_world_size, test/distributed/test_store.py::LibUvTCPStoreTest::test_multi_worker_with_nonfixed_world_size, test/distributed/test_store.py::LibUvTCPStoreTest::test_multitenancy, test/distributed/test_store.py::LibUvTCPStoreTest::test_numkeys_delkeys, test/distributed/test_store.py::LibUvTCPStoreTest::test_queues, test/distributed/test_store.py::LibUvTCPStoreTest::test_queues_bidirectional, test/distributed/test_store.py::LibUvTCPStoreTest::test_queues_nonblocking, test/distributed/test_store.py::LibUvTCPStoreTest::test_queues_timeout, test/distributed/test_store.py::LibUvTCPStoreTest::test_repr, test/distributed/test_store.py::LibUvTCPStoreTest::test_set_get_check, test/distributed/test_store.py::LibUvTCPStoreTest::test_simple_wait, test/distributed/test_store.py::LibUvTCPStoreTest::test_store_timeout_on_missing_clients, test/distributed/test_store.py::LibUvTCPStoreTest::test_take_over_listen_socket, test/distributed/test_store.py::LibUvTCPStoreTest::test_world_size_0_raises, test/distributed/test_store.py::PrefixTCPStoreTest::test_append, test/distributed/test_store.py::PrefixTCPStoreTest::test_clone, test/distributed/test_store.py::PrefixTCPStoreTest::test_compare_set, test/distributed/test_store.py::PrefixTCPStoreTest::test_list_keys, test/distributed/test_store.py::PrefixTCPStoreTest::test_multi_get, test/distributed/test_store.py::PrefixTCPStoreTest::test_multi_set, test/distributed/test_store.py::PrefixTCPStoreTest::test_queues, test/distributed/test_store.py::PrefixTCPStoreTest::test_queues_bidirectional, test/distributed/test_store.py::PrefixTCPStoreTest::test_queues_nonblocking, test/distributed/test_store.py::PrefixTCPStoreTest::test_queues_timeout, test/distributed/test_store.py::PrefixTCPStoreTest::test_set_get_check, test/distributed/test_store.py::PrefixTCPStoreTest::test_simple_wait, 
test/distributed/test_store.py::PrefixTCPStoreTest::test_underlying_non_prefix_store, test/distributed/test_store.py::PythonStoreTest::test_set_get, test/distributed/test_store.py::RendezvousTest::test_unknown_handler, test/distributed/test_store.py::RendezvousTest::test_url_with_node_params, test/distributed/test_store.py::RendezvousEnvTest::test_nominal, test/distributed/test_store.py::RendezvousFileTest::test_common_errors, test/distributed/test_store.py::RendezvousFileTest::test_nominal, test/distributed/test_store.py::RendezvousTCPTest::test_common_errors, test/distributed/test_store.py::RendezvousTCPTest::test_dns_timeout, test/distributed/test_store.py::RendezvousTCPTest::test_nominal, test/distributed/test_store.py::RendezvousTCPTest::test_tcp_store_timeout_doest_break_client, test/distributed/test_store.py::RendezvousTCPTest::test_tcp_store_timeout_set, test/distributed/test_store.py::RendezvousTCPTest::test_tcp_store_url_with_libuv, test/distributed/test_store.py::TestPythonStore::test_append_roundtrip, test/distributed/test_store.py::TestPythonStore::test_extended_methods_fallbacks, test/distributed/test_store.py::TestPythonStore::test_has_extended_api_passthrough, test/distributed/test_store.py::TestPythonStore::test_has_extended_api_roundtrip, test/distributed/test_store.py::TestPythonStore::test_multi_get_roundtrip, test/distributed/test_store.py::TestPythonStore::test_multi_set_roundtrip, test/distributed/test_store.py::TestPythonStore::test_optional_methods_fail, test/distributed/test_store.py::TestMultiThreadedWait::test_wait_file_store, test/distributed/test_store.py::TestMultiThreadedWait::test_wait_hash_store, test/distributed/test_store.py::TestMultiThreadedWait::test_wait_prefix_file_store, test/distributed/test_store.py::TestMultiThreadedWait::test_wait_tcp_store, test/distributed/test_store.py::TestMultiThreadedWait::test_wait_tcp_store_uv, test/distributed/test_store.py::TimeoutTest::test_interrupt_doesnt_break_wait, test/distributed/test_store.py::InitPgWithNonUvStore::test_with_env_var, test/distributed/test_store.py::InitPgWithNonUvStore::test_with_url_param, test/distributed/test_store.py::TestClientProtocol::test_client_connect
2025-12-04T12:36:53.2009037Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_append
2025-12-04T12:36:53.2010039Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_clone
2025-12-04T12:36:53.2010970Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_compare_set
2025-12-04T12:36:53.2011895Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_init_pg_and_rpc_with_same_file
2025-12-04T12:36:53.2012809Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_list_keys
2025-12-04T12:36:53.2013629Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_multi_get
2025-12-04T12:36:53.2014444Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_multi_set
2025-12-04T12:36:53.2015242Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_queues
2025-12-04T12:36:53.2016099Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_queues_bidirectional
2025-12-04T12:36:53.2017021Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_queues_nonblocking
2025-12-04T12:36:53.2017910Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_queues_timeout
2025-12-04T12:36:53.2018750Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_refcount
2025-12-04T12:36:53.2019651Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_set_get_check
2025-12-04T12:36:53.2020510Z Running 1 items in this shard: test/distributed/test_store.py::FileStoreTest::test_simple_wait
2025-12-04T12:36:53.2021442Z Running 1 items in this shard: test/distributed/test_store.py::HashStoreTest::test_append
2025-12-04T12:36:53.2022228Z Running 1 items in this shard: test/distributed/test_store.py::HashStoreTest::test_clone
2025-12-04T12:36:53.2023049Z Running 1 items in this shard: test/distributed/test_store.py::HashStoreTest::test_compare_set
2025-12-04T12:36:53.2023882Z Running 1 items in this shard: test/distributed/test_store.py::HashStoreTest::test_list_keys
2025-12-04T12:36:53.2024700Z Running 1 items in this shard: test/distributed/test_store.py::HashStoreTest::test_multi_get
2025-12-04T12:36:53.2025521Z Running 1 items in this shard: test/distributed/test_store.py::HashStoreTest::test_multi_set
2025-12-04T12:36:53.2026331Z Running 1 items in this shard: test/distributed/test_store.py::HashStoreTest::test_queues
2025-12-04T12:36:53.2027184Z Running 1 items in this shard: test/distributed/test_store.py::HashStoreTest::test_queues_bidirectional
2025-12-04T12:36:53.2028094Z Running 1 items in this shard: test/distributed/test_store.py::HashStoreTest::test_queues_nonblocking
2025-12-04T12:36:53.2028984Z Running 1 items in this shard: test/distributed/test_store.py::HashStoreTest::test_queues_timeout
2025-12-04T12:36:53.2029840Z Running 1 items in this shard: test/distributed/test_store.py::HashStoreTest::test_set_get_check
2025-12-04T12:36:53.2030741Z Running 1 items in this shard: test/distributed/test_store.py::HashStoreTest::test_simple_wait
2025-12-04T12:36:53.2031630Z Running 1 items in this shard: test/distributed/test_store.py::PrefixStoreTest::test_get_underlying_store
2025-12-04T12:36:53.2032536Z Running 1 items in this shard: test/distributed/test_store.py::PrefixFileStoreTest::test_append
2025-12-04T12:36:53.2033396Z Running 1 items in this shard: test/distributed/test_store.py::PrefixFileStoreTest::test_clone
2025-12-04T12:36:53.2034267Z Running 1 items in this shard: test/distributed/test_store.py::PrefixFileStoreTest::test_compare_set
2025-12-04T12:36:53.2035165Z Running 1 items in this shard: test/distributed/test_store.py::PrefixFileStoreTest::test_list_keys
2025-12-04T12:36:53.2036048Z Running 1 items in this shard: test/distributed/test_store.py::PrefixFileStoreTest::test_multi_get
2025-12-04T12:36:53.2037066Z Running 1 items in this shard: test/distributed/test_store.py::PrefixFileStoreTest::test_multi_set
2025-12-04T12:36:53.2038034Z Running 1 items in this shard: test/distributed/test_store.py::PrefixFileStoreTest::test_queues
2025-12-04T12:36:53.2038955Z Running 1 items in this shard: test/distributed/test_store.py::PrefixFileStoreTest::test_queues_bidirectional
2025-12-04T12:36:53.2039914Z Running 1 items in this shard: test/distributed/test_store.py::PrefixFileStoreTest::test_queues_nonblocking
2025-12-04T12:36:53.2040908Z Running 1 items in this shard: test/distributed/test_store.py::PrefixFileStoreTest::test_queues_timeout
2025-12-04T12:36:53.2041815Z Running 1 items in this shard: test/distributed/test_store.py::PrefixFileStoreTest::test_set_get_check
2025-12-04T12:36:53.2042724Z Running 1 items in this shard: test/distributed/test_store.py::PrefixFileStoreTest::test_simple_wait
2025-12-04T12:36:53.2043636Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_address_already_in_use
2025-12-04T12:36:53.2044507Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_agent_store
2025-12-04T12:36:53.2045315Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_append
2025-12-04T12:36:53.2046102Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_clone
2025-12-04T12:36:53.2046922Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_compare_set
2025-12-04T12:36:53.2047918Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_init_pg_and_rpc_with_same_socket
2025-12-04T12:36:53.2048831Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_list_keys
2025-12-04T12:36:53.2049715Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_multi_get
2025-12-04T12:36:53.2050519Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_multi_set
2025-12-04T12:36:53.2051493Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_multi_worker_with_fixed_world_size
2025-12-04T12:36:53.2052541Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_multi_worker_with_nonfixed_world_size
2025-12-04T12:36:53.2053499Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_multitenancy
2025-12-04T12:36:53.2054353Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_numkeys_delkeys
2025-12-04T12:36:53.2055169Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_queues
2025-12-04T12:36:53.2056016Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_queues_bidirectional
2025-12-04T12:36:53.2056923Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_queues_nonblocking
2025-12-04T12:36:53.2057801Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_queues_timeout
2025-12-04T12:36:53.2058611Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_repr
2025-12-04T12:36:53.2059424Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_set_get_check
2025-12-04T12:36:53.2060270Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_simple_wait
2025-12-04T12:36:53.2061252Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_store_timeout_on_missing_clients
2025-12-04T12:36:53.2062217Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_take_over_listen_socket
2025-12-04T12:36:53.2063133Z Running 1 items in this shard: test/distributed/test_store.py::TCPStoreTest::test_world_size_0_raises
2025-12-04T12:36:53.2064067Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_address_already_in_use
2025-12-04T12:36:53.2064989Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_agent_store
2025-12-04T12:36:53.2065851Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_append
2025-12-04T12:36:53.2066777Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_clone
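For reference, the Store classes these tests exercise (FileStore, HashStore, TCPStore, PrefixStore) share a small set/get/wait interface. A self-contained example with torch.distributed.TCPStore, one host acting as both server and client (the port choice and world_size=1 are just to keep this runnable on a single machine):

    import datetime
    import torch.distributed as dist

    # Server-side store; other clients would connect with is_master=False
    # and the same host/port.
    store = dist.TCPStore("127.0.0.1", 29500, world_size=1, is_master=True,
                          timeout=datetime.timedelta(seconds=30))
    store.set("key", "value")
    print(store.get("key"))  # b'value' (values come back as bytes)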
2025-12-04T12:36:53.2067637Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_compare_set 2025-12-04T12:36:53.2068607Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_init_pg_and_rpc_with_same_socket 2025-12-04T12:36:53.2069574Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_list_keys 2025-12-04T12:36:53.2070421Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_multi_get 2025-12-04T12:36:53.2071331Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_multi_set 2025-12-04T12:36:53.2072294Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_multi_worker_with_fixed_world_size 2025-12-04T12:36:53.2073380Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_multi_worker_with_nonfixed_world_size 2025-12-04T12:36:53.2074378Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_multitenancy 2025-12-04T12:36:53.2075280Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_numkeys_delkeys 2025-12-04T12:36:53.2076194Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_queues 2025-12-04T12:36:53.2077089Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_queues_bidirectional 2025-12-04T12:36:53.2078076Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_queues_nonblocking 2025-12-04T12:36:53.2078991Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_queues_timeout 2025-12-04T12:36:53.2079849Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_repr 2025-12-04T12:36:53.2080757Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_set_get_check 2025-12-04T12:36:53.2081648Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_simple_wait 2025-12-04T12:36:53.2082620Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_store_timeout_on_missing_clients 2025-12-04T12:36:53.2083639Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_take_over_listen_socket 2025-12-04T12:36:53.2084582Z Running 1 items in this shard: test/distributed/test_store.py::LibUvTCPStoreTest::test_world_size_0_raises 2025-12-04T12:36:53.2085512Z Running 1 items in this shard: test/distributed/test_store.py::PrefixTCPStoreTest::test_append 2025-12-04T12:36:53.2086359Z Running 1 items in this shard: test/distributed/test_store.py::PrefixTCPStoreTest::test_clone 2025-12-04T12:36:53.2087224Z Running 1 items in this shard: test/distributed/test_store.py::PrefixTCPStoreTest::test_compare_set 2025-12-04T12:36:53.2088112Z Running 1 items in this shard: test/distributed/test_store.py::PrefixTCPStoreTest::test_list_keys 2025-12-04T12:36:53.2088978Z Running 1 items in this shard: test/distributed/test_store.py::PrefixTCPStoreTest::test_multi_get 2025-12-04T12:36:53.2089840Z Running 1 items in this shard: test/distributed/test_store.py::PrefixTCPStoreTest::test_multi_set 2025-12-04T12:36:53.2090768Z Running 1 items in this shard: test/distributed/test_store.py::PrefixTCPStoreTest::test_queues 2025-12-04T12:36:53.2091806Z Running 1 items in this shard: 
test/distributed/test_store.py::PrefixTCPStoreTest::test_queues_bidirectional 2025-12-04T12:36:53.2105241Z Running 1 items in this shard: test/distributed/test_store.py::PrefixTCPStoreTest::test_queues_nonblocking 2025-12-04T12:36:53.2106190Z Running 1 items in this shard: test/distributed/test_store.py::PrefixTCPStoreTest::test_queues_timeout 2025-12-04T12:36:53.2107096Z Running 1 items in this shard: test/distributed/test_store.py::PrefixTCPStoreTest::test_set_get_check 2025-12-04T12:36:53.2108193Z Running 1 items in this shard: test/distributed/test_store.py::PrefixTCPStoreTest::test_simple_wait 2025-12-04T12:36:53.2109163Z Running 1 items in this shard: test/distributed/test_store.py::PrefixTCPStoreTest::test_underlying_non_prefix_store 2025-12-04T12:36:53.2110099Z Running 1 items in this shard: test/distributed/test_store.py::PythonStoreTest::test_set_get 2025-12-04T12:36:53.2111028Z Running 1 items in this shard: test/distributed/test_store.py::RendezvousTest::test_unknown_handler 2025-12-04T12:36:53.2111936Z Running 1 items in this shard: test/distributed/test_store.py::RendezvousTest::test_url_with_node_params 2025-12-04T12:36:53.2112827Z Running 1 items in this shard: test/distributed/test_store.py::RendezvousEnvTest::test_nominal 2025-12-04T12:36:53.2113703Z Running 1 items in this shard: test/distributed/test_store.py::RendezvousFileTest::test_common_errors 2025-12-04T12:36:53.2114577Z Running 1 items in this shard: test/distributed/test_store.py::RendezvousFileTest::test_nominal 2025-12-04T12:36:53.2115469Z Running 1 items in this shard: test/distributed/test_store.py::RendezvousTCPTest::test_common_errors 2025-12-04T12:36:53.2116361Z Running 1 items in this shard: test/distributed/test_store.py::RendezvousTCPTest::test_dns_timeout 2025-12-04T12:36:53.2117223Z Running 1 items in this shard: test/distributed/test_store.py::RendezvousTCPTest::test_nominal 2025-12-04T12:36:53.2118244Z Running 1 items in this shard: test/distributed/test_store.py::RendezvousTCPTest::test_tcp_store_timeout_doest_break_client 2025-12-04T12:36:53.2119329Z Running 1 items in this shard: test/distributed/test_store.py::RendezvousTCPTest::test_tcp_store_timeout_set 2025-12-04T12:36:53.2120299Z Running 1 items in this shard: test/distributed/test_store.py::RendezvousTCPTest::test_tcp_store_url_with_libuv 2025-12-04T12:36:53.2121314Z Running 1 items in this shard: test/distributed/test_store.py::TestPythonStore::test_append_roundtrip 2025-12-04T12:36:53.2122268Z Running 1 items in this shard: test/distributed/test_store.py::TestPythonStore::test_extended_methods_fallbacks 2025-12-04T12:36:53.2123260Z Running 1 items in this shard: test/distributed/test_store.py::TestPythonStore::test_has_extended_api_passthrough 2025-12-04T12:36:53.2124307Z Running 1 items in this shard: test/distributed/test_store.py::TestPythonStore::test_has_extended_api_roundtrip 2025-12-04T12:36:53.2125340Z Running 1 items in this shard: test/distributed/test_store.py::TestPythonStore::test_multi_get_roundtrip 2025-12-04T12:36:53.2126321Z Running 1 items in this shard: test/distributed/test_store.py::TestPythonStore::test_multi_set_roundtrip 2025-12-04T12:36:53.2127274Z Running 1 items in this shard: test/distributed/test_store.py::TestPythonStore::test_optional_methods_fail 2025-12-04T12:36:53.2128221Z Running 1 items in this shard: test/distributed/test_store.py::TestMultiThreadedWait::test_wait_file_store 2025-12-04T12:36:53.2129169Z Running 1 items in this shard: 
test/distributed/test_store.py::TestMultiThreadedWait::test_wait_hash_store 2025-12-04T12:36:53.2130149Z Running 1 items in this shard: test/distributed/test_store.py::TestMultiThreadedWait::test_wait_prefix_file_store 2025-12-04T12:36:53.2131183Z Running 1 items in this shard: test/distributed/test_store.py::TestMultiThreadedWait::test_wait_tcp_store 2025-12-04T12:36:53.2132147Z Running 1 items in this shard: test/distributed/test_store.py::TestMultiThreadedWait::test_wait_tcp_store_uv 2025-12-04T12:36:53.2133115Z Running 1 items in this shard: test/distributed/test_store.py::TimeoutTest::test_interrupt_doesnt_break_wait 2025-12-04T12:36:53.2134053Z Running 1 items in this shard: test/distributed/test_store.py::InitPgWithNonUvStore::test_with_env_var 2025-12-04T12:36:53.2134972Z Running 1 items in this shard: test/distributed/test_store.py::InitPgWithNonUvStore::test_with_url_param 2025-12-04T12:36:53.2135890Z Running 1 items in this shard: test/distributed/test_store.py::TestClientProtocol::test_client_connect 2025-12-04T12:36:53.2136423Z 2025-12-04T12:36:53.2136891Z Finished distributed/test_store 1/1 ... [2025-12-04 12:36:53.193570][4976842.043501509], took 5.07min 2025-12-04T12:36:53.2138273Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml 2025-12-04T12:36:53.2139588Z Failed to parse and upload json test reports: Unable to locate credentials 2025-12-04T12:36:53.2140318Z Running distributed/test_c10d_nccl 1/2 ... [2025-12-04 12:36:53.202025][4976842.051957786] 2025-12-04T12:36:53.2140981Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set 2025-12-04T12:36:53.2142335Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/test_c10d_nccl.py', '--shard-id=1', '--num-shards=2', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... 
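[Editor's note] The "Executing [...]" line above shows how the runner splits a test file across the matrix: each invocation gets --shard-id and --num-shards. Below is a minimal sketch of one way such a split can be computed; the function name and the plain round-robin policy are illustrative assumptions, not PyTorch's actual run_test.py logic (which also balances shards by recorded test times).

# Hypothetical round-robin sharding sketch (not the real run_test.py code).
def tests_for_shard(tests: list[str], shard_id: int, num_shards: int) -> list[str]:
    """Return the tests a 1-indexed shard should run under a round-robin split."""
    if not (1 <= shard_id <= num_shards):
        raise ValueError(f"shard_id must be in [1, {num_shards}], got {shard_id}")
    return [t for i, t in enumerate(tests) if i % num_shards == shard_id - 1]

if __name__ == "__main__":
    tests = [f"test_{i}" for i in range(7)]
    # With --shard-id=1 --num-shards=2 this shard gets test_0, test_2, test_4, test_6.
    print(tests_for_shard(tests, shard_id=1, num_shards=2))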
2025-12-04T12:51:51.4991872Z
2025-12-04T12:51:51.4993241Z distributed/test_c10d_nccl 1/2 was successful, full logs can be found in artifacts with path test/test-reports/distributed.test_c10d_nccl_1.2_834272db4a870bb0_.log
2025-12-04T12:51:51.5041331Z Running 104 items in this shard: test/distributed/test_c10d_nccl.py::RendezvousEnvTest::test_common_errors, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLNoGPUTest::test_init_no_gpus, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLInitTest::test_scalable_init, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_abort_in_destroy_mixed_empty_pgs, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_abort_pg, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_close_pg_eager_init_True, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_comm_split_group, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_comm_split_group_mixed_backend, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_comm_split_subgroup, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_destruct_before_terminate_pg, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_extend_nccl_pg_timeout_backend_nccl, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_extra_cuda_context, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_extra_cuda_context_sync_ops, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_file_store_check, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_nan_assert_bfloat16, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_nan_assert_float16, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_nan_assert_float32, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_nan_assert_float64, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_nan_assert_float8_e4m3fn, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_nan_assert_float8_e5m2, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_nan_rank_filter, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_nccl_dist_backend_error, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_restart_pg, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_set_nccl_pg_timeout_backend0, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_shrink_group_multiple_comms, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_shrink_group_multiple_exclusions, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_shrink_group_multiple_iterations, test/distributed/test_c10d_nccl.py::ProcessGroupNCCLGroupTest::test_shrink_group_vs_abort_reinit_performance, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_arbitrary_forward_return_value, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_arbitrary_forward_return_value_grad_is_view, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_bf16_compress_wrapper_is_view, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_builtin_ddp_comm_hooks_nccl_grad_is_view, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_channels_last_contig, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_ddp_checkpointing_dynamic_weight_sharing,
test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_ddp_checkpointing_twice_static_graph_use_reentrant_True, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_ddp_checkpointing_twice_weight_sharing, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_ddp_checkpointing_weight_sharing_use_reentrant_False, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_ddp_comm_hook_allreduce_hook_nccl, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_ddp_comm_hook_allreduce_with_then_hook_nccl, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_ddp_comm_hook_future_passing_gpu_nccl, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_ddp_complex_params_and_grads, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_ddp_multi_device_module_config, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_ddp_weight_sharing, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_failure_recovery, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_find_unused_parameters_kwarg_debug_info, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_find_unused_parameters_kwarg_debug_off, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_find_unused_parameters_kwarg_grad_is_view_debug_detail, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_find_unused_parameters_kwarg_grad_is_view_debug_info, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_find_unused_parameters_kwarg_grad_is_view_debug_off, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_fp16, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_invalid_powerSGD_state, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_nccl_backend_multi_device_module_device_ids_None, test/distributed/test_c10d_nccl.py::DistributedDataParallelTest::test_nccl_backend_single_device_module_device_ids_None, test/distributed/test_c10d_nccl.py::WorkHookTest::test_on_completion_hook_with_ddp, test/distributed/test_c10d_nccl.py::NcclErrorHandlingTest::test_error_detection_and_propagation, test/distributed/test_c10d_nccl.py::NcclErrorHandlingTest::test_invalid_nccl_blocking_wait_env, test/distributed/test_c10d_nccl.py::NcclErrorHandlingTest::test_nccl_errors_nonblocking, test/distributed/test_c10d_nccl.py::NcclUserBufferRegistrationTest::test_nccl_user_buffer_registration, test/distributed/test_c10d_nccl.py::CommTest::test_all_reduce_coalesced_manager_nccl, test/distributed/test_c10d_nccl.py::CommTest::test_intra_node_comm_all_reduce, test/distributed/test_c10d_nccl.py::CommTest::test_nccl_barrier, test/distributed/test_c10d_nccl.py::CommTest::test_nccl_barrier_device_ids, test/distributed/test_c10d_nccl.py::CommTest::test_nccl_warn_not_in_group_debug_off, test/distributed/test_c10d_nccl.py::CommTest::test_reduce_scatter_base_k, test/distributed/test_c10d_nccl.py::CommTest::test_sequence_num_incremented_nccl_subgroup, test/distributed/test_c10d_nccl.py::CommTest::test_sequence_num_set_default_pg_nccl, test/distributed/test_c10d_nccl.py::CommTest::test_time_estimate_nccl, test/distributed/test_c10d_nccl.py::CommTest::test_unwaited, test/distributed/test_c10d_nccl.py::NcclProcessGroupWithDispatchedCollectivesTests::test_allgather_base, test/distributed/test_c10d_nccl.py::NcclProcessGroupWithDispatchedCollectivesTests::test_allgather_float8_float8_e4m3fn, 
test/distributed/test_c10d_nccl.py::NcclProcessGroupWithDispatchedCollectivesTests::test_allgather_float8_float8_e5m2, test/distributed/test_c10d_nccl.py::NcclProcessGroupWithDispatchedCollectivesTests::test_collectives, test/distributed/test_c10d_nccl.py::LargeCommTest::test_batch_send_recv_subgroup_group_rank_False, test/distributed/test_c10d_nccl.py::LargeCommTest::test_batch_send_recv_subgroup_group_rank_True, test/distributed/test_c10d_nccl.py::LargeCommTest::test_broadcast_object_list_subgroup_set_device0_group_rank_True, test/distributed/test_c10d_nccl.py::LargeCommTest::test_broadcast_subgroup_group_rank_True, test/distributed/test_c10d_nccl.py::LargeCommTest::test_gather_subgroup_group_rank_True, test/distributed/test_c10d_nccl.py::LargeCommTest::test_reduce_subgroup_group_rank_False, test/distributed/test_c10d_nccl.py::LargeCommTest::test_reduce_subgroup_group_rank_True, test/distributed/test_c10d_nccl.py::LargeCommTest::test_scatter_object_list_subgroup_group_rank_False, test/distributed/test_c10d_nccl.py::LargeCommTest::test_scatter_object_list_subgroup_group_rank_True, test/distributed/test_c10d_nccl.py::LargeCommTest::test_scatter_subgroup_group_rank_True, test/distributed/test_c10d_nccl.py::LargeCommTest::test_send_recv_object_list_subgroup_set_device1_group_rank_True, test/distributed/test_c10d_nccl.py::LargeCommTest::test_send_recv_subgroup_group_rank_False_async_op_False, test/distributed/test_c10d_nccl.py::LargeCommTest::test_send_recv_subgroup_group_rank_True_async_op_True, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_allgather_uneven_timing_enabled_True, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_batched_send_recv_op_sizes_per_coalesce0_timing_enabled_True, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_coalescing_manager_collective_timing_enabled_False, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_fr_record_reset_circular_buffer_full_timing_enabled_False, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_fr_record_reset_partial_overwrite_timing_enabled_False, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_fr_record_reset_partial_overwrite_timing_enabled_True, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_fr_record_reset_timing_enabled_True, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_fr_record_reset_wraparound_timing_enabled_False, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_fr_record_reset_wraparound_timing_enabled_True, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_individual_send_recv_op_sizes1_timing_enabled_False, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_short_json_timing_enabled_False_include_collectives_False, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_short_pickle_timing_enabled_False_include_collectives_False, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_short_pickle_timing_enabled_False_include_collectives_True, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_short_pickle_timing_enabled_True_include_collectives_True, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_trace_while_active_timing_enabled_False_only_active_True, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_trace_while_active_timing_enabled_True_only_active_False, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_trace_while_stuck_timing_enabled_False, test/distributed/test_c10d_nccl.py::NCCLTraceTest::test_trace_while_stuck_timing_enabled_True, 
test/distributed/test_c10d_nccl.py::ProcessGroupNCCLLargerScaleTest::test_comm_recursive_split_group
2025-12-04T12:51:51.5208159Z
2025-12-04T12:51:51.5208548Z Finished distributed/test_c10d_nccl 1/2 ... [2025-12-04 12:51:51.501426][4977740.351356289], took 14.97min
2025-12-04T12:51:51.5209935Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T12:51:51.5211312Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T12:51:51.5212023Z GITHUB_RUN_ID, GITHUB_RUN_ATTEMPT, or ARTIFACTS_FILE_SUFFIX not set, not uploading
2025-12-04T12:51:51.5212614Z Uploading artifacts took 0.00 seconds
2025-12-04T12:51:51.5213348Z Running distributed/elastic/timer/api_test 1/1 ... [2025-12-04 12:51:51.510009][4977740.359942235]
2025-12-04T12:51:51.5214002Z SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set
2025-12-04T12:51:51.5215334Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'distributed/elastic/timer/api_test.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=0', '--import-slow-tests', '--import-disabled-tests'] ... [2025-12-04 12:51:51.510471]
2025-12-04T12:51:52.3906650Z
2025-12-04T12:51:52.3908257Z distributed/elastic/timer/api_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/distributed.elastic.timer.api_test_1.1_4d6f2400e39c0637_.log
2025-12-04T12:51:52.3909281Z
2025-12-04T12:51:52.3909745Z Finished distributed/elastic/timer/api_test 1/1 ... [2025-12-04 12:51:52.390247][4977741.240181971], took 0.01min
2025-12-04T12:51:52.3958226Z Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_inductor_collectives/distributed.test_inductor_collectives-522d9376131b79d6.xml
2025-12-04T12:51:52.3983647Z Failed to parse and upload json test reports: Unable to locate credentials
2025-12-04T12:51:54.5558967Z Running test batch 'tests to run' cost 9335.31 seconds
2025-12-04T12:51:54.5568069Z Emitting td_test_failure_stats_v2
2025-12-04T12:51:54.5572305Z Writing 1 documents to S3 ossci-raw-job-status/ossci_uploaded_metrics/td_test_failure_stats_v2_1764852714_02d34f02d11011f09e219a09c9033007
2025-12-04T12:51:56.5927193Z /var/lib/jenkins/pytorch/tools/stats/upload_metrics.py:156: UserWarning: Error uploading metric td_test_failure_stats_v2 to DynamoDB: Unable to locate credentials
2025-12-04T12:51:56.5928918Z warn(f"Error uploading metric {metric_name} to DynamoDB: {e}")
2025-12-04T12:51:56.5935413Z Emitting td_test_failure_stats_v2
2025-12-04T12:51:56.5939839Z Writing 1 documents to S3 ossci-raw-job-status/ossci_uploaded_metrics/td_test_failure_stats_v2_1764852716_040a0e42d11011f09e219a09c9033007
2025-12-04T12:51:56.5996740Z Emitting td_test_failure_stats_v2
2025-12-04T12:51:56.5998044Z Writing 1 documents to S3 ossci-raw-job-status/ossci_uploaded_metrics/td_test_failure_stats_v2_1764852716_040af942d11011f09e219a09c9033007
2025-12-04T12:51:56.6049953Z distributed/fsdp/test_fsdp_uneven 1/1 failed!
2025-12-04T12:51:56.6050485Z distributed/fsdp/test_fsdp_comm 1/1 failed!
2025-12-04T12:51:56.6051065Z distributed/fsdp/test_fsdp_clip_grad_norm 1/1 failed!
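[Editor's note] The "Failed to parse and upload json test reports" lines and the DynamoDB UserWarning above come from the stats uploaders treating missing AWS credentials as non-fatal: the job warns and keeps running, and only the three FSDP test failures actually fail it. A minimal sketch of that warn-and-continue pattern follows; it assumes boto3 and uses an illustrative bucket/key, and is not the actual tools/stats/upload_metrics.py code.

# Hypothetical best-effort metric upload: missing credentials warn, never raise.
import json
from warnings import warn

import boto3
from botocore.exceptions import BotoCoreError, ClientError

def emit_metric(metric_name: str, doc: dict, bucket: str = "example-metrics-bucket") -> None:
    """Upload one metric document; CI must not fail when no credential chain exists."""
    try:
        boto3.client("s3").put_object(
            Bucket=bucket,  # illustrative bucket name
            Key=f"ossci_uploaded_metrics/{metric_name}.json",
            Body=json.dumps(doc).encode(),
        )
    except (BotoCoreError, ClientError) as e:  # NoCredentialsError is a BotoCoreError
        warn(f"Error uploading metric {metric_name}: {e}")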
2025-12-04T12:51:57.1507242Z
2025-12-04T12:51:57.1507616Z real    155m40.952s
2025-12-04T12:51:57.1508005Z user    383m34.924s
2025-12-04T12:51:57.1508304Z sys     439m44.135s
2025-12-04T12:51:57.1508607Z + sccache_epilogue
2025-12-04T12:51:57.1508996Z + echo '::group::Sccache Compilation Log'
2025-12-04T12:51:57.1509832Z ##[group]Sccache Compilation Log
2025-12-04T12:51:57.1510312Z + echo '=================== sccache compilation log ==================='
2025-12-04T12:51:57.1511025Z =================== sccache compilation log ===================
2025-12-04T12:51:57.1511795Z + python /var/lib/jenkins/pytorch/.ci/pytorch/print_sccache_log.py /var/lib/jenkins/sccache_error.log
2025-12-04T12:51:57.1600754Z + echo '=========== If your build fails, please take a look at the log above for possible reasons ==========='
2025-12-04T12:51:57.1601578Z =========== If your build fails, please take a look at the log above for possible reasons ===========
2025-12-04T12:51:57.1602174Z + sccache --show-stats
2025-12-04T12:51:57.1633393Z Compile requests                  906
2025-12-04T12:51:57.1633839Z Compile requests executed         0
2025-12-04T12:51:57.1634243Z Cache hits                        0
2025-12-04T12:51:57.1634625Z Cache misses                      0
2025-12-04T12:51:57.1635014Z Cache hits rate                   -
2025-12-04T12:51:57.1635403Z Cache timeouts                    0
2025-12-04T12:51:57.1635789Z Cache read errors                 0
2025-12-04T12:51:57.1636171Z Forced recaches                   0
2025-12-04T12:51:57.1636554Z Cache write errors                0
2025-12-04T12:51:57.1637259Z Cache errors                      0
2025-12-04T12:51:57.1637646Z Compilations                      0
2025-12-04T12:51:57.1638040Z Compilation failures              0
2025-12-04T12:51:57.1638448Z Non-cacheable compilations        0
2025-12-04T12:51:57.1638855Z Non-cacheable calls               8
2025-12-04T12:51:57.1639254Z Non-compilation calls             898
2025-12-04T12:51:57.1639659Z Unsupported compiler calls        0
2025-12-04T12:51:57.1640070Z Average cache write               0.000 s
2025-12-04T12:51:57.1640486Z Average compiler                  0.000 s
2025-12-04T12:51:57.1640971Z Average cache read hit            0.000 s
2025-12-04T12:51:57.1641389Z Failed distributed compilations   0
2025-12-04T12:51:57.1641664Z
2025-12-04T12:51:57.1641804Z Non-cacheable reasons:
2025-12-04T12:51:57.1642153Z -E                                8
2025-12-04T12:51:57.1642405Z
2025-12-04T12:51:57.1642681Z Cache location                    Local disk: "/var/lib/jenkins/.cache/sccache"
2025-12-04T12:51:57.1643219Z Use direct/preprocessor mode?     yes
2025-12-04T12:51:57.1643633Z Version (client)                  0.10.0
2025-12-04T12:51:57.1644040Z Max cache size                    10 GiB
2025-12-04T12:51:57.1644448Z + sccache --stop-server
2025-12-04T12:51:57.1665453Z Stopping sccache server...
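[Editor's note] The stats above show 906 compile requests but zero compilations: 898 were non-compilation calls and 8 were preprocessor-only (-E) invocations, which is expected for a test-only job that builds nothing. A sketch of scraping these counters into a dict follows; it assumes the two-column "label  value" layout printed by sccache 0.10 as shown above, which may differ in other versions.

# Sketch: parse `sccache --show-stats` text output into a dict (illustrative only).
import re
import subprocess

def sccache_stats() -> dict[str, str]:
    """Run sccache --show-stats and split each line on a run of 2+ spaces."""
    out = subprocess.run(
        ["sccache", "--show-stats"], capture_output=True, text=True, check=True
    ).stdout
    stats: dict[str, str] = {}
    for line in out.splitlines():
        m = re.match(r"^(\S.*?)\s{2,}(.+)$", line)  # "label    value" rows only
        if m:
            stats[m.group(1)] = m.group(2)
    return stats

if __name__ == "__main__":
    s = sccache_stats()
    print(s.get("Compile requests"), s.get("Cache hits"))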
2025-12-04T12:51:57.1679658Z + echo ::endgroup::
2025-12-04T12:51:57.1680208Z ##[endgroup]
2025-12-04T12:51:57.1744586Z ##[error]Process completed with exit code 1.
2025-12-04T12:51:57.1814781Z ##[group]Run # copy test results back to the mounted workspace, needed sudo, resulting permissions were correct
2025-12-04T12:51:57.1815822Z # copy test results back to the mounted workspace, needed sudo, resulting permissions were correct
2025-12-04T12:51:57.1817097Z docker exec -t "f376f08e81f7dfe3b6a525fadd8605d64876caf592501f7ac6f3aa383436ff61" sh -c "cd ../pytorch && sudo cp -R test/test-reports ../workspace/test"
2025-12-04T12:51:57.1827718Z shell: /usr/bin/bash -e {0}
2025-12-04T12:51:57.1828079Z env:
2025-12-04T12:51:57.1828384Z GIT_DEFAULT_BRANCH: main
2025-12-04T12:51:57.1828822Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts
2025-12-04T12:51:57.1829390Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results
2025-12-04T12:51:57.1829919Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs
2025-12-04T12:51:57.1832100Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host
2025-12-04T12:51:57.1833724Z AWS_DEFAULT_REGION: us-east-1
2025-12-04T12:51:57.1834096Z AWS_REGION: us-east-1
2025-12-04T12:51:57.1834570Z AWS_ACCESS_KEY_ID: ***
2025-12-04T12:51:57.1835053Z AWS_SECRET_ACCESS_KEY: ***
2025-12-04T12:51:57.1842094Z AWS_SESSION_TOKEN: ***
2025-12-04T12:51:57.1842645Z CONTAINER_NAME: f376f08e81f7dfe3b6a525fadd8605d64876caf592501f7ac6f3aa383436ff61
2025-12-04T12:51:57.1843239Z ##[endgroup]
2025-12-04T12:51:57.2585237Z ##[group]Run docker exec -t "f376f08e81f7dfe3b6a525fadd8605d64876caf592501f7ac6f3aa383436ff61" sh -c "sudo chown -R 1001:1001 test"
2025-12-04T12:51:57.2586524Z docker exec -t "f376f08e81f7dfe3b6a525fadd8605d64876caf592501f7ac6f3aa383436ff61" sh -c "sudo chown -R 1001:1001 test"
2025-12-04T12:51:57.2596588Z shell: /usr/bin/bash -e {0}
2025-12-04T12:51:57.2612020Z ##[endgroup]
2025-12-04T12:51:57.3513289Z ##[group]Run cat test/**/*_toprint.log || true
2025-12-04T12:51:57.3513824Z cat test/**/*_toprint.log || true
2025-12-04T12:51:57.3523836Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-12-04T12:51:57.3539746Z ##[endgroup]
2025-12-04T12:51:57.3619906Z cat: 'test/**/*_toprint.log': No such file or directory
2025-12-04T12:51:57.3794605Z Prepare all required actions
2025-12-04T12:51:57.3795941Z Getting action download info
2025-12-04T12:51:57.8357706Z Download action repository 'seemethere/upload-artifact-s3@v5' (SHA:baba72d0712b404f646cebe0730933554ebce96a)
2025-12-04T12:51:58.7108378Z Download action repository 'actions/upload-artifact@v4' (SHA:ea165f8d65b6e75b540449e92b4886f43607fa02)
2025-12-04T12:51:59.6125454Z ##[group]Run ./.github/actions/upload-test-artifacts
2025-12-04T12:51:59.6125600Z with:
2025-12-04T12:51:59.6125689Z use-gha: true
2025-12-04T12:51:59.6125843Z file-suffix: test-distributed-1-3-linux.rocm.gpu.gfx942.4.b_57116213174
2025-12-04T12:51:59.6126021Z s3-bucket: gha-artifacts
2025-12-04T12:51:59.6130712Z ##[endgroup]
2025-12-04T12:51:59.6159417Z ##[group]Run actions/upload-artifact@v4
2025-12-04T12:51:59.6159541Z with:
2025-12-04T12:51:59.6159733Z name: test-jsons-runattempt1-test-distributed-1-3-linux.rocm.gpu.gfx942.4.b_57116213174.zip
2025-12-04T12:51:59.6159942Z retention-days: 14
2025-12-04T12:51:59.6160049Z if-no-files-found: warn
2025-12-04T12:51:59.6160157Z path: test/**/*.json
2025-12-04T12:51:59.6160258Z compression-level: 6
2025-12-04T12:51:59.6160358Z overwrite: false
2025-12-04T12:51:59.6160459Z include-hidden-files: false
2025-12-04T12:51:59.6165014Z ##[endgroup]
2025-12-04T12:52:00.0771509Z With the provided path, there will be 6 files uploaded
2025-12-04T12:52:00.0775132Z Artifact name is valid!
2025-12-04T12:52:00.0775560Z Root directory input is valid!
2025-12-04T12:52:00.3294945Z Beginning upload of artifact content to blob storage
2025-12-04T12:52:00.7505662Z Uploaded bytes 44615
2025-12-04T12:52:00.8282162Z Finished uploading artifact content to blob storage!
2025-12-04T12:52:00.8286864Z SHA256 digest of uploaded artifact zip is b530d42e8ba6b1619df186b40b7eb519932713f516eb73f258cab28d71860e23
2025-12-04T12:52:00.8289768Z Finalizing artifact upload
2025-12-04T12:52:00.9685184Z Artifact test-jsons-runattempt1-test-distributed-1-3-linux.rocm.gpu.gfx942.4.b_57116213174.zip.zip successfully finalized. Artifact ID 4764075393
2025-12-04T12:52:00.9686918Z Artifact test-jsons-runattempt1-test-distributed-1-3-linux.rocm.gpu.gfx942.4.b_57116213174.zip has been successfully uploaded! Final size is 44615 bytes. Artifact ID is 4764075393
2025-12-04T12:52:00.9695849Z Artifact download URL: https://github.com/pytorch/pytorch/actions/runs/19922849170/artifacts/4764075393
2025-12-04T12:52:00.9885572Z ##[group]Run actions/upload-artifact@v4
2025-12-04T12:52:00.9886003Z with:
2025-12-04T12:52:00.9886668Z name: test-reports-runattempt1-test-distributed-1-3-linux.rocm.gpu.gfx942.4.b_57116213174.zip
2025-12-04T12:52:00.9887417Z retention-days: 14
2025-12-04T12:52:00.9887786Z if-no-files-found: ignore
2025-12-04T12:52:00.9888175Z path: test/**/*.xml test/**/*.csv
2025-12-04T12:52:00.9888589Z compression-level: 6
2025-12-04T12:52:00.9888947Z overwrite: false
2025-12-04T12:52:00.9889293Z include-hidden-files: false
2025-12-04T12:52:00.9904992Z ##[endgroup]
2025-12-04T12:52:01.5396548Z With the provided path, there will be 831 files uploaded
2025-12-04T12:52:01.5398684Z Artifact name is valid!
2025-12-04T12:52:01.5399089Z Root directory input is valid!
2025-12-04T12:52:01.7784477Z Beginning upload of artifact content to blob storage
2025-12-04T12:52:02.6147552Z Uploaded bytes 613114
2025-12-04T12:52:02.6902221Z Finished uploading artifact content to blob storage!
2025-12-04T12:52:02.6906748Z SHA256 digest of uploaded artifact zip is 8cc4f776fc8aaae297935c59753c71aea522471478843f03d58a3fe4a44430ca
2025-12-04T12:52:02.6909067Z Finalizing artifact upload
2025-12-04T12:52:02.8340153Z Artifact test-reports-runattempt1-test-distributed-1-3-linux.rocm.gpu.gfx942.4.b_57116213174.zip.zip successfully finalized. Artifact ID 4764075700
2025-12-04T12:52:02.8342066Z Artifact test-reports-runattempt1-test-distributed-1-3-linux.rocm.gpu.gfx942.4.b_57116213174.zip has been successfully uploaded! Final size is 613114 bytes. Artifact ID is 4764075700
2025-12-04T12:52:02.8349841Z Artifact download URL: https://github.com/pytorch/pytorch/actions/runs/19922849170/artifacts/4764075700
2025-12-04T12:52:02.8594564Z ##[group]Run actions/upload-artifact@v4
2025-12-04T12:52:02.8595016Z with:
2025-12-04T12:52:02.8595609Z name: logs-runattempt1-test-distributed-1-3-linux.rocm.gpu.gfx942.4.b_57116213174.zip
2025-12-04T12:52:02.8596296Z retention-days: 14
2025-12-04T12:52:02.8596671Z if-no-files-found: ignore
2025-12-04T12:52:02.8597077Z path: usage_log.txt test/**/*.log
2025-12-04T12:52:02.8597507Z compression-level: 6
2025-12-04T12:52:02.8597858Z overwrite: false
2025-12-04T12:52:02.8598220Z include-hidden-files: false
2025-12-04T12:52:02.8614379Z ##[endgroup]
2025-12-04T12:52:03.3746790Z Multiple search paths detected. Calculating the least common ancestor of all paths
2025-12-04T12:52:03.3748218Z The least common ancestor is /home/runner/_work/pytorch/pytorch. This will be the root directory of the artifact
2025-12-04T12:52:03.3749046Z With the provided path, there will be 94 files uploaded
2025-12-04T12:52:03.3750670Z Artifact name is valid!
2025-12-04T12:52:03.3751074Z Root directory input is valid!
2025-12-04T12:52:03.6201618Z Beginning upload of artifact content to blob storage
2025-12-04T12:52:04.2496679Z Uploaded bytes 286099
2025-12-04T12:52:04.3242278Z Finished uploading artifact content to blob storage!
2025-12-04T12:52:04.3246619Z SHA256 digest of uploaded artifact zip is bd7bbdfa5b09112680ad7c1090abd400fc2723b4c3fa53095e8c8911a9880d47
2025-12-04T12:52:04.3249122Z Finalizing artifact upload
2025-12-04T12:52:04.4763262Z Artifact logs-runattempt1-test-distributed-1-3-linux.rocm.gpu.gfx942.4.b_57116213174.zip.zip successfully finalized. Artifact ID 4764075941
2025-12-04T12:52:04.4764881Z Artifact logs-runattempt1-test-distributed-1-3-linux.rocm.gpu.gfx942.4.b_57116213174.zip has been successfully uploaded! Final size is 286099 bytes. Artifact ID is 4764075941
2025-12-04T12:52:04.4773471Z Artifact download URL: https://github.com/pytorch/pytorch/actions/runs/19922849170/artifacts/4764075941
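[Editor's note] Each of the three upload steps above zips a file glob, reports the file count and byte total, and logs a SHA256 digest of the resulting zip. That digest can be recomputed locally to verify a downloaded artifact; a sketch follows (the file name is hypothetical, and this is not the actions/upload-artifact implementation, which computes the digest during upload).

# Sketch: recompute the SHA256 digest of a downloaded artifact zip.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large artifacts never load into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

if __name__ == "__main__":
    print(sha256_of(Path("test-reports.zip")))  # hypothetical local file name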
-iname "core.[1-9]*" -exec docker exec "${CONTAINER_NAME}" sh -c "gdb python {} -ex 'bt' -ex 'q'" \; 2025-12-04T12:52:04.4990702Z shell: /usr/bin/bash -e {0} 2025-12-04T12:52:04.4991097Z env: 2025-12-04T12:52:04.4991427Z GIT_DEFAULT_BRANCH: main 2025-12-04T12:52:04.4991917Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T12:52:04.4992546Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T12:52:04.4993115Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T12:52:04.4994868Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T12:52:04.4996519Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T12:52:04.4996925Z AWS_REGION: us-east-1 2025-12-04T12:52:04.4997427Z AWS_ACCESS_KEY_ID: *** 2025-12-04T12:52:04.4997963Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T12:52:04.5005092Z AWS_SESSION_TOKEN: *** 2025-12-04T12:52:04.5005692Z CONTAINER_NAME: f376f08e81f7dfe3b6a525fadd8605d64876caf592501f7ac6f3aa383436ff61 2025-12-04T12:52:04.5006357Z ##[endgroup] 2025-12-04T12:52:04.6424705Z ##[group]Run actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 2025-12-04T12:52:04.6425334Z with: 2025-12-04T12:52:04.6425812Z name: coredumps-distributed-1-3-linux.rocm.gpu.gfx942.4.b 2025-12-04T12:52:04.6426375Z retention-days: 14 2025-12-04T12:52:04.6426762Z if-no-files-found: ignore 2025-12-04T12:52:04.6427169Z path: ./**/core.[1-9]* 2025-12-04T12:52:04.6427549Z compression-level: 6 2025-12-04T12:52:04.6427908Z overwrite: false 2025-12-04T12:52:04.6428279Z include-hidden-files: false 2025-12-04T12:52:04.6428677Z env: 2025-12-04T12:52:04.6429003Z GIT_DEFAULT_BRANCH: main 2025-12-04T12:52:04.6429491Z RUNNER_ARTIFACT_DIR: /home/runner/_work/_temp/artifacts 2025-12-04T12:52:04.6430134Z RUNNER_TEST_RESULTS_DIR: /home/runner/_work/_temp/test-results 2025-12-04T12:52:04.6430792Z RUNNER_DOCS_DIR: /home/runner/_work/_temp/docs 2025-12-04T12:52:04.6432567Z GPU_FLAG: --device=/dev/mem --device=/dev/kfd --group-add 110 --device /dev/dri/renderD128 --device /dev/dri/renderD136 --device /dev/dri/renderD144 --device /dev/dri/renderD152 --group-add video --group-add 109 --group-add daemon --group-add bin --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host 2025-12-04T12:52:04.6434251Z AWS_DEFAULT_REGION: us-east-1 2025-12-04T12:52:04.6434672Z AWS_REGION: us-east-1 2025-12-04T12:52:04.6435191Z AWS_ACCESS_KEY_ID: *** 2025-12-04T12:52:04.6435736Z AWS_SECRET_ACCESS_KEY: *** 2025-12-04T12:52:04.6443397Z AWS_SESSION_TOKEN: *** 2025-12-04T12:52:04.6444000Z CONTAINER_NAME: f376f08e81f7dfe3b6a525fadd8605d64876caf592501f7ac6f3aa383436ff61 2025-12-04T12:52:04.6444628Z ##[endgroup] 2025-12-04T12:52:09.8999676Z No files were found with the provided path: ./**/core.[1-9]*. No artifacts will be uploaded. 2025-12-04T12:52:09.9255275Z Post job cleanup. 2025-12-04T12:52:09.9294479Z Post job cleanup. 2025-12-04T12:52:09.9507088Z Logging out of registry 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-12-04T12:52:09.9760329Z Post job cleanup. 2025-12-04T12:52:10.0489349Z Post job cleanup. 2025-12-04T12:52:10.0553112Z Post job cleanup. 
2025-12-04T12:52:10.1031742Z [command]/usr/bin/git version
2025-12-04T12:52:10.1071904Z git version 2.52.0
2025-12-04T12:52:10.1097517Z Copying '/home/runner/.gitconfig' to '/home/runner/_work/_temp/9b525761-5e36-40de-ab5d-8a7ab4d6bf07/.gitconfig'
2025-12-04T12:52:10.1104525Z Temporarily overriding HOME='/home/runner/_work/_temp/9b525761-5e36-40de-ab5d-8a7ab4d6bf07' before making global git config changes
2025-12-04T12:52:10.1105603Z Adding repository directory to the temporary git global config as a safe directory
2025-12-04T12:52:10.1106940Z [command]/usr/bin/git config --global --add safe.directory /home/runner/_work/pytorch/pytorch
2025-12-04T12:52:10.1144371Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2025-12-04T12:52:10.1176060Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
2025-12-04T12:52:10.1468621Z Entering 'android/libs/fbjni'
2025-12-04T12:52:10.1498064Z Entering 'third_party/FP16'
2025-12-04T12:52:10.1529043Z Entering 'third_party/FXdiv'
2025-12-04T12:52:10.1565185Z Entering 'third_party/NNPACK'
2025-12-04T12:52:10.1601521Z Entering 'third_party/NVTX'
2025-12-04T12:52:10.1627706Z Entering 'third_party/VulkanMemoryAllocator'
2025-12-04T12:52:10.1679765Z Entering 'third_party/XNNPACK'
2025-12-04T12:52:10.1738040Z Entering 'third_party/aiter'
2025-12-04T12:52:10.1779905Z Entering 'third_party/aiter/3rdparty/composable_kernel'
2025-12-04T12:52:10.1819505Z Entering 'third_party/benchmark'
2025-12-04T12:52:10.1873293Z Entering 'third_party/composable_kernel'
2025-12-04T12:52:10.1907971Z Entering 'third_party/cpp-httplib'
2025-12-04T12:52:10.1932740Z Entering 'third_party/cpuinfo'
2025-12-04T12:52:10.1960162Z Entering 'third_party/cudnn_frontend'
2025-12-04T12:52:10.1997051Z Entering 'third_party/cutlass'
2025-12-04T12:52:10.2025176Z Entering 'third_party/fbgemm'
2025-12-04T12:52:10.2051926Z Entering 'third_party/fbgemm/external/asmjit'
2025-12-04T12:52:10.2090928Z Entering 'third_party/fbgemm/external/composable_kernel'
2025-12-04T12:52:10.2140465Z Entering 'third_party/fbgemm/external/cpuinfo'
2025-12-04T12:52:10.2180045Z Entering 'third_party/fbgemm/external/cutlass'
2025-12-04T12:52:10.2223186Z Entering 'third_party/fbgemm/external/googletest'
2025-12-04T12:52:10.2267780Z Entering 'third_party/fbgemm/external/hipify_torch'
2025-12-04T12:52:10.2295325Z Entering 'third_party/fbgemm/external/json'
2025-12-04T12:52:10.2327671Z Entering 'third_party/flash-attention'
2025-12-04T12:52:10.2376775Z Entering 'third_party/flash-attention/csrc/composable_kernel'
2025-12-04T12:52:10.2408966Z Entering 'third_party/flash-attention/csrc/cutlass'
2025-12-04T12:52:10.2456363Z Entering 'third_party/flatbuffers'
2025-12-04T12:52:10.2489330Z Entering 'third_party/fmt'
2025-12-04T12:52:10.2532936Z Entering 'third_party/gemmlowp/gemmlowp'
2025-12-04T12:52:10.2573360Z Entering 'third_party/gloo'
2025-12-04T12:52:10.2614383Z Entering 'third_party/googletest'
2025-12-04T12:52:10.2645361Z Entering 'third_party/ideep'
2025-12-04T12:52:10.2681968Z Entering 'third_party/ideep/mkl-dnn'
2025-12-04T12:52:10.2721446Z Entering 'third_party/ittapi'
2025-12-04T12:52:10.2753573Z Entering 'third_party/kineto'
2025-12-04T12:52:10.2788658Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2025-12-04T12:52:10.2818751Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2025-12-04T12:52:10.2852113Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2025-12-04T12:52:10.2893078Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2025-12-04T12:52:10.2942010Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2025-12-04T12:52:10.2968604Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2025-12-04T12:52:10.3025018Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2025-12-04T12:52:10.3062332Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2025-12-04T12:52:10.3089227Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2025-12-04T12:52:10.3115766Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2025-12-04T12:52:10.3150230Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp'
2025-12-04T12:52:10.3190031Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T12:52:10.3216717Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T12:52:10.3260708Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2025-12-04T12:52:10.3284272Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2025-12-04T12:52:10.3312661Z Entering 'third_party/kleidiai'
2025-12-04T12:52:10.3339922Z Entering 'third_party/mimalloc'
2025-12-04T12:52:10.3382384Z Entering 'third_party/nlohmann'
2025-12-04T12:52:10.3412265Z Entering 'third_party/onnx'
2025-12-04T12:52:10.3474909Z Entering 'third_party/onnx/third_party/pybind11'
2025-12-04T12:52:10.3529930Z Entering 'third_party/opentelemetry-cpp'
2025-12-04T12:52:10.3577334Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2025-12-04T12:52:10.3605438Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2025-12-04T12:52:10.3648192Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2025-12-04T12:52:10.3698018Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2025-12-04T12:52:10.3747934Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2025-12-04T12:52:10.3805549Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2025-12-04T12:52:10.3851198Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2025-12-04T12:52:10.3888564Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T12:52:10.3924257Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T12:52:10.3968530Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2025-12-04T12:52:10.4014731Z Entering 'third_party/pocketfft'
2025-12-04T12:52:10.4056184Z Entering 'third_party/protobuf'
2025-12-04T12:52:10.4107137Z Entering 'third_party/protobuf/third_party/benchmark'
2025-12-04T12:52:10.4154611Z Entering 'third_party/protobuf/third_party/googletest'
2025-12-04T12:52:10.4204762Z Entering 'third_party/psimd'
2025-12-04T12:52:10.4241187Z Entering 'third_party/pthreadpool'
2025-12-04T12:52:10.4264851Z Entering 'third_party/pybind11'
2025-12-04T12:52:10.4290039Z Entering 'third_party/python-peachpy'
2025-12-04T12:52:10.4333452Z Entering 'third_party/sleef'
2025-12-04T12:52:10.4377630Z Entering 'third_party/tensorpipe'
2025-12-04T12:52:10.4403126Z Entering 'third_party/tensorpipe/third_party/googletest'
2025-12-04T12:52:10.4439095Z Entering 'third_party/tensorpipe/third_party/libnop'
2025-12-04T12:52:10.4468451Z Entering 'third_party/tensorpipe/third_party/libuv'
2025-12-04T12:52:10.4492902Z Entering 'third_party/tensorpipe/third_party/pybind11'
2025-12-04T12:52:10.4530691Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2025-12-04T12:52:10.4602614Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2025-12-04T12:52:10.4633141Z http.https://github.com/.extraheader
2025-12-04T12:52:10.4652554Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader
2025-12-04T12:52:10.4689532Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
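The two foreach commands above are actions/checkout's credential scrub: the checkout token travels as an http.https://github.com/.extraheader value in each repository's local config, so teardown must unset it in the superproject and then in every submodule, recursively; core.sshCommand gets the same treatment. A distilled sketch of that pattern, runnable in any checkout with submodules (the trailing `|| :` keeps the loop alive where a submodule never had the key):

  # Sketch: strip an injected config key from the superproject and all submodules.
  scrub_key() {
    local key="$1"
    git config --local --unset-all "$key" || :
    git submodule foreach --recursive \
      "git config --local --unset-all '$key' || :"
  }

  scrub_key 'core.sshCommand'
  scrub_key 'http.https://github.com/.extraheader'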
2025-12-04T12:52:10.8860066Z [command]/usr/bin/git config --local --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:10.8889986Z [command]/usr/bin/git submodule foreach --recursive git config --local --show-origin --name-only --get-regexp remote.origin.url
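The second command above exists because a submodule's config does not live in its working tree: it sits under the superproject at .git/modules/<path>/config, which is why the cleanup asks --show-origin to reveal each file (the per-submodule output is trimmed here) and then re-reads every file with git config --file, checking for conditional includeIf.gitdir sections. A sketch of that enumeration, assuming it runs from the root of a checkout with submodules:

  # Sketch: locate every submodule's config file, then scan each file for
  # conditional-include sections, as the cleanup above does.
  git submodule --quiet foreach --recursive \
      'git config --local --show-origin --name-only --get-regexp remote.origin.url' |
    sed 's/^file://; s/[[:space:]]*remote\.origin\.url$//' |
    while IFS= read -r cfg; do
      git config --file "$cfg" --name-only --get-regexp '^includeIf\.gitdir:' || :
    done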
2025-12-04T12:52:11.3939064Z Post job cleanup.
2025-12-04T12:52:11.4404354Z [command]/usr/bin/git version
2025-12-04T12:52:11.4431850Z git version 2.52.0
2025-12-04T12:52:11.4452042Z Copying '/home/runner/.gitconfig' to '/home/runner/_work/_temp/32905867-0f4f-4266-99d5-63d542ad946e/.gitconfig'
2025-12-04T12:52:11.4458305Z Temporarily overriding HOME='/home/runner/_work/_temp/32905867-0f4f-4266-99d5-63d542ad946e' before making global git config changes
2025-12-04T12:52:11.4459394Z Adding repository directory to the temporary git global config as a safe directory
2025-12-04T12:52:11.4460574Z [command]/usr/bin/git config --global --add safe.directory /home/runner/_work/pytorch/pytorch
2025-12-04T12:52:11.4502818Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2025-12-04T12:52:11.4538098Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
2025-12-04T12:52:11.4787909Z Entering 'android/libs/fbjni'
2025-12-04T12:52:11.4844248Z Entering 'third_party/FP16'
2025-12-04T12:52:11.4883077Z Entering 'third_party/FXdiv'
2025-12-04T12:52:11.4927591Z Entering 'third_party/NNPACK'
2025-12-04T12:52:11.4957791Z Entering 'third_party/NVTX'
2025-12-04T12:52:11.4983408Z Entering 'third_party/VulkanMemoryAllocator'
2025-12-04T12:52:11.5007687Z Entering 'third_party/XNNPACK'
2025-12-04T12:52:11.5045545Z Entering 'third_party/aiter'
2025-12-04T12:52:11.5086753Z Entering 'third_party/aiter/3rdparty/composable_kernel'
2025-12-04T12:52:11.5121908Z Entering 'third_party/benchmark'
2025-12-04T12:52:11.5163703Z Entering 'third_party/composable_kernel'
2025-12-04T12:52:11.5200812Z Entering 'third_party/cpp-httplib'
2025-12-04T12:52:11.5224866Z Entering 'third_party/cpuinfo'
2025-12-04T12:52:11.5247024Z Entering 'third_party/cudnn_frontend'
2025-12-04T12:52:11.5273003Z Entering 'third_party/cutlass'
2025-12-04T12:52:11.5301204Z Entering 'third_party/fbgemm'
2025-12-04T12:52:11.5326892Z Entering 'third_party/fbgemm/external/asmjit'
2025-12-04T12:52:11.5363058Z Entering 'third_party/fbgemm/external/composable_kernel'
2025-12-04T12:52:11.5404173Z Entering 'third_party/fbgemm/external/cpuinfo'
2025-12-04T12:52:11.5456890Z Entering 'third_party/fbgemm/external/cutlass'
2025-12-04T12:52:11.5506579Z Entering 'third_party/fbgemm/external/googletest'
2025-12-04T12:52:11.5544541Z Entering 'third_party/fbgemm/external/hipify_torch'
2025-12-04T12:52:11.5575430Z Entering 'third_party/fbgemm/external/json'
2025-12-04T12:52:11.5598732Z Entering 'third_party/flash-attention'
2025-12-04T12:52:11.5628893Z Entering 'third_party/flash-attention/csrc/composable_kernel'
2025-12-04T12:52:11.5666701Z Entering 'third_party/flash-attention/csrc/cutlass'
2025-12-04T12:52:11.5693853Z Entering 'third_party/flatbuffers'
2025-12-04T12:52:11.5718419Z Entering 'third_party/fmt'
2025-12-04T12:52:11.5739773Z Entering 'third_party/gemmlowp/gemmlowp'
2025-12-04T12:52:11.5761405Z Entering 'third_party/gloo'
2025-12-04T12:52:11.5785077Z Entering 'third_party/googletest'
2025-12-04T12:52:11.5820067Z Entering 'third_party/ideep'
2025-12-04T12:52:11.5843966Z Entering 'third_party/ideep/mkl-dnn'
2025-12-04T12:52:11.5869095Z Entering 'third_party/ittapi'
2025-12-04T12:52:11.5892434Z Entering 'third_party/kineto'
2025-12-04T12:52:11.5915258Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2025-12-04T12:52:11.5935595Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2025-12-04T12:52:11.5961327Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2025-12-04T12:52:11.5987200Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2025-12-04T12:52:11.6010316Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2025-12-04T12:52:11.6030543Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2025-12-04T12:52:11.6057390Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2025-12-04T12:52:11.6077682Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2025-12-04T12:52:11.6096916Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2025-12-04T12:52:11.6117157Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2025-12-04T12:52:11.6164464Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp'
2025-12-04T12:52:11.6191181Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T12:52:11.6213421Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T12:52:11.6236081Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2025-12-04T12:52:11.6256813Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2025-12-04T12:52:11.6290195Z Entering 'third_party/kleidiai'
2025-12-04T12:52:11.6345296Z Entering 'third_party/mimalloc'
2025-12-04T12:52:11.6368967Z Entering 'third_party/nlohmann'
2025-12-04T12:52:11.6394161Z Entering 'third_party/onnx'
2025-12-04T12:52:11.6423728Z Entering 'third_party/onnx/third_party/pybind11'
2025-12-04T12:52:11.6448087Z Entering 'third_party/opentelemetry-cpp'
2025-12-04T12:52:11.6472647Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2025-12-04T12:52:11.6520169Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2025-12-04T12:52:11.6548302Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2025-12-04T12:52:11.6568672Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2025-12-04T12:52:11.6588715Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2025-12-04T12:52:11.6620814Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2025-12-04T12:52:11.6643226Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2025-12-04T12:52:11.6689683Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T12:52:11.6712978Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T12:52:11.6779588Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2025-12-04T12:52:11.6830970Z Entering 'third_party/pocketfft'
2025-12-04T12:52:11.6854925Z Entering 'third_party/protobuf'
2025-12-04T12:52:11.6884897Z Entering 'third_party/protobuf/third_party/benchmark'
2025-12-04T12:52:11.6908005Z Entering 'third_party/protobuf/third_party/googletest'
2025-12-04T12:52:11.6955510Z Entering 'third_party/psimd'
2025-12-04T12:52:11.6984620Z Entering 'third_party/pthreadpool'
2025-12-04T12:52:11.7008223Z Entering 'third_party/pybind11'
2025-12-04T12:52:11.7029395Z Entering 'third_party/python-peachpy'
2025-12-04T12:52:11.7051664Z Entering 'third_party/sleef'
2025-12-04T12:52:11.7076142Z Entering 'third_party/tensorpipe'
2025-12-04T12:52:11.7117767Z Entering 'third_party/tensorpipe/third_party/googletest'
2025-12-04T12:52:11.7150991Z Entering 'third_party/tensorpipe/third_party/libnop'
2025-12-04T12:52:11.7202949Z Entering 'third_party/tensorpipe/third_party/libuv'
2025-12-04T12:52:11.7239421Z Entering 'third_party/tensorpipe/third_party/pybind11'
2025-12-04T12:52:11.7282735Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2025-12-04T12:52:11.7345892Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2025-12-04T12:52:11.7378582Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
2025-12-04T12:52:11.7573832Z Entering 'android/libs/fbjni'
2025-12-04T12:52:11.7600419Z Entering 'third_party/FP16'
2025-12-04T12:52:11.7630771Z Entering 'third_party/FXdiv'
2025-12-04T12:52:11.7652954Z Entering 'third_party/NNPACK'
2025-12-04T12:52:11.7675968Z Entering 'third_party/NVTX'
2025-12-04T12:52:11.7707703Z Entering 'third_party/VulkanMemoryAllocator'
2025-12-04T12:52:11.7729301Z Entering 'third_party/XNNPACK'
2025-12-04T12:52:11.7768333Z Entering 'third_party/aiter'
2025-12-04T12:52:11.7792003Z Entering 'third_party/aiter/3rdparty/composable_kernel'
2025-12-04T12:52:11.7817116Z Entering 'third_party/benchmark'
2025-12-04T12:52:11.7846566Z Entering 'third_party/composable_kernel'
2025-12-04T12:52:11.7878107Z Entering 'third_party/cpp-httplib'
2025-12-04T12:52:11.7906681Z Entering 'third_party/cpuinfo'
2025-12-04T12:52:11.7929803Z Entering 'third_party/cudnn_frontend'
2025-12-04T12:52:11.7951696Z Entering 'third_party/cutlass'
2025-12-04T12:52:11.7978720Z Entering 'third_party/fbgemm'
2025-12-04T12:52:11.8015015Z Entering 'third_party/fbgemm/external/asmjit'
2025-12-04T12:52:11.8052373Z Entering 'third_party/fbgemm/external/composable_kernel'
2025-12-04T12:52:11.8089069Z Entering 'third_party/fbgemm/external/cpuinfo'
2025-12-04T12:52:11.8112453Z Entering 'third_party/fbgemm/external/cutlass'
2025-12-04T12:52:11.8144945Z Entering 'third_party/fbgemm/external/googletest'
2025-12-04T12:52:11.8165207Z Entering 'third_party/fbgemm/external/hipify_torch'
2025-12-04T12:52:11.8205829Z Entering 'third_party/fbgemm/external/json'
2025-12-04T12:52:11.8266426Z Entering 'third_party/flash-attention'
2025-12-04T12:52:11.8292428Z Entering 'third_party/flash-attention/csrc/composable_kernel'
2025-12-04T12:52:11.8333810Z Entering 'third_party/flash-attention/csrc/cutlass'
2025-12-04T12:52:11.8381499Z Entering 'third_party/flatbuffers'
2025-12-04T12:52:11.8426047Z Entering 'third_party/fmt'
2025-12-04T12:52:11.8449018Z Entering 'third_party/gemmlowp/gemmlowp'
2025-12-04T12:52:11.8471722Z Entering 'third_party/gloo'
2025-12-04T12:52:11.8499721Z Entering 'third_party/googletest'
2025-12-04T12:52:11.8521369Z Entering 'third_party/ideep'
2025-12-04T12:52:11.8553509Z Entering 'third_party/ideep/mkl-dnn'
2025-12-04T12:52:11.8590449Z Entering 'third_party/ittapi'
2025-12-04T12:52:11.8613113Z Entering 'third_party/kineto'
2025-12-04T12:52:11.8635438Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2025-12-04T12:52:11.8656610Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2025-12-04T12:52:11.8701858Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2025-12-04T12:52:11.8749545Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2025-12-04T12:52:11.8785962Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2025-12-04T12:52:11.8809926Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2025-12-04T12:52:11.8834246Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2025-12-04T12:52:11.8855736Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2025-12-04T12:52:11.8891335Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2025-12-04T12:52:11.8932153Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2025-12-04T12:52:11.8993134Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp'
2025-12-04T12:52:11.9034347Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T12:52:11.9076702Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T12:52:11.9127227Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2025-12-04T12:52:11.9165733Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2025-12-04T12:52:11.9192865Z Entering 'third_party/kleidiai'
2025-12-04T12:52:11.9231353Z Entering 'third_party/mimalloc'
2025-12-04T12:52:11.9276979Z Entering 'third_party/nlohmann'
2025-12-04T12:52:11.9306095Z Entering 'third_party/onnx'
2025-12-04T12:52:11.9334376Z Entering 'third_party/onnx/third_party/pybind11'
2025-12-04T12:52:11.9374554Z Entering 'third_party/opentelemetry-cpp'
2025-12-04T12:52:11.9398832Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2025-12-04T12:52:11.9434633Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2025-12-04T12:52:11.9456870Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2025-12-04T12:52:11.9476582Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2025-12-04T12:52:11.9521099Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2025-12-04T12:52:11.9555767Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2025-12-04T12:52:11.9586704Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2025-12-04T12:52:11.9616209Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T12:52:11.9652061Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T12:52:11.9687664Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2025-12-04T12:52:11.9734602Z Entering 'third_party/pocketfft'
2025-12-04T12:52:11.9759414Z Entering 'third_party/protobuf'
2025-12-04T12:52:11.9784057Z Entering 'third_party/protobuf/third_party/benchmark'
2025-12-04T12:52:11.9815715Z Entering 'third_party/protobuf/third_party/googletest'
2025-12-04T12:52:11.9838767Z Entering 'third_party/psimd'
2025-12-04T12:52:11.9879135Z Entering 'third_party/pthreadpool'
2025-12-04T12:52:11.9905426Z Entering 'third_party/pybind11'
2025-12-04T12:52:11.9943881Z Entering 'third_party/python-peachpy'
2025-12-04T12:52:11.9965804Z Entering 'third_party/sleef'
2025-12-04T12:52:11.9987559Z Entering 'third_party/tensorpipe'
2025-12-04T12:52:12.0014059Z Entering 'third_party/tensorpipe/third_party/googletest'
2025-12-04T12:52:12.0035624Z Entering 'third_party/tensorpipe/third_party/libnop'
2025-12-04T12:52:12.0062998Z Entering 'third_party/tensorpipe/third_party/libuv'
2025-12-04T12:52:12.0083857Z Entering 'third_party/tensorpipe/third_party/pybind11'
2025-12-04T12:52:12.0104510Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2025-12-04T12:52:12.0160301Z [command]/usr/bin/git config --local --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.0194783Z [command]/usr/bin/git submodule foreach --recursive git config --local --show-origin --name-only --get-regexp remote.origin.url
2025-12-04T12:52:12.0415354Z Entering 'android/libs/fbjni'
2025-12-04T12:52:12.0435446Z file:/home/runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config	remote.origin.url
2025-12-04T12:52:12.0444953Z Entering 'third_party/FP16'
2025-12-04T12:52:12.0458389Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config	remote.origin.url
2025-12-04T12:52:12.0468266Z Entering 'third_party/FXdiv'
2025-12-04T12:52:12.0480888Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config	remote.origin.url
2025-12-04T12:52:12.0488204Z Entering 'third_party/NNPACK'
2025-12-04T12:52:12.0499593Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config	remote.origin.url
2025-12-04T12:52:12.0509325Z Entering 'third_party/NVTX'
2025-12-04T12:52:12.0529775Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NVTX/config	remote.origin.url
2025-12-04T12:52:12.0540518Z Entering 'third_party/VulkanMemoryAllocator'
2025-12-04T12:52:12.0561516Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config	remote.origin.url
2025-12-04T12:52:12.0582846Z Entering 'third_party/XNNPACK'
2025-12-04T12:52:12.0594378Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config	remote.origin.url
2025-12-04T12:52:12.0611195Z Entering 'third_party/aiter'
2025-12-04T12:52:12.0627770Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/config	remote.origin.url
2025-12-04T12:52:12.0638484Z Entering 'third_party/aiter/3rdparty/composable_kernel'
2025-12-04T12:52:12.0654049Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/modules/3rdparty/composable_kernel/config	remote.origin.url
2025-12-04T12:52:12.0679298Z Entering 'third_party/benchmark'
2025-12-04T12:52:12.0701307Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config	remote.origin.url
2025-12-04T12:52:12.0714508Z Entering 'third_party/composable_kernel'
2025-12-04T12:52:12.0725603Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/composable_kernel/config	remote.origin.url
2025-12-04T12:52:12.0737936Z Entering 'third_party/cpp-httplib'
2025-12-04T12:52:12.0749529Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpp-httplib/config	remote.origin.url
2025-12-04T12:52:12.0759130Z Entering 'third_party/cpuinfo'
2025-12-04T12:52:12.0771980Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config	remote.origin.url
2025-12-04T12:52:12.0792248Z Entering 'third_party/cudnn_frontend'
2025-12-04T12:52:12.0814372Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config	remote.origin.url
2025-12-04T12:52:12.0835610Z Entering 'third_party/cutlass'
2025-12-04T12:52:12.0848802Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config	remote.origin.url
2025-12-04T12:52:12.0864433Z Entering 'third_party/fbgemm'
2025-12-04T12:52:12.0875352Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config	remote.origin.url
2025-12-04T12:52:12.0890285Z Entering 'third_party/fbgemm/external/asmjit'
2025-12-04T12:52:12.0904441Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/asmjit/config	remote.origin.url
2025-12-04T12:52:12.0915845Z Entering 'third_party/fbgemm/external/composable_kernel'
2025-12-04T12:52:12.0926300Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/composable_kernel/config	remote.origin.url
2025-12-04T12:52:12.0938245Z Entering 'third_party/fbgemm/external/cpuinfo'
2025-12-04T12:52:12.0956924Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cpuinfo/config	remote.origin.url
2025-12-04T12:52:12.0965664Z Entering 'third_party/fbgemm/external/cutlass'
2025-12-04T12:52:12.0983764Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cutlass/config	remote.origin.url
2025-12-04T12:52:12.0996890Z Entering 'third_party/fbgemm/external/googletest'
2025-12-04T12:52:12.1007759Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/googletest/config	remote.origin.url
2025-12-04T12:52:12.1016213Z Entering 'third_party/fbgemm/external/hipify_torch'
2025-12-04T12:52:12.1026904Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/hipify_torch/config	remote.origin.url
2025-12-04T12:52:12.1036750Z Entering 'third_party/fbgemm/external/json'
2025-12-04T12:52:12.1055540Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/json/config	remote.origin.url
2025-12-04T12:52:12.1068385Z Entering 'third_party/flash-attention'
2025-12-04T12:52:12.1087081Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/config	remote.origin.url
2025-12-04T12:52:12.1097453Z Entering 'third_party/flash-attention/csrc/composable_kernel'
2025-12-04T12:52:12.1106606Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/composable_kernel/config	remote.origin.url
2025-12-04T12:52:12.1121174Z Entering 'third_party/flash-attention/csrc/cutlass'
2025-12-04T12:52:12.1131735Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/cutlass/config	remote.origin.url
2025-12-04T12:52:12.1148195Z Entering 'third_party/flatbuffers'
2025-12-04T12:52:12.1167565Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config	remote.origin.url
2025-12-04T12:52:12.1179411Z Entering 'third_party/fmt'
2025-12-04T12:52:12.1190445Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config	remote.origin.url
2025-12-04T12:52:12.1200056Z Entering 'third_party/gemmlowp/gemmlowp'
2025-12-04T12:52:12.1216197Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config	remote.origin.url
2025-12-04T12:52:12.1225645Z Entering 'third_party/gloo'
2025-12-04T12:52:12.1235659Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config	remote.origin.url
2025-12-04T12:52:12.1246439Z Entering 'third_party/googletest'
2025-12-04T12:52:12.1257357Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config	remote.origin.url
2025-12-04T12:52:12.1277004Z Entering 'third_party/ideep'
2025-12-04T12:52:12.1289064Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config	remote.origin.url
2025-12-04T12:52:12.1298868Z Entering 'third_party/ideep/mkl-dnn'
2025-12-04T12:52:12.1309398Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config	remote.origin.url
2025-12-04T12:52:12.1321775Z Entering 'third_party/ittapi'
2025-12-04T12:52:12.1333034Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config	remote.origin.url
2025-12-04T12:52:12.1343792Z Entering 'third_party/kineto'
2025-12-04T12:52:12.1354664Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config	remote.origin.url
2025-12-04T12:52:12.1364211Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2025-12-04T12:52:12.1373523Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config	remote.origin.url
2025-12-04T12:52:12.1394309Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2025-12-04T12:52:12.1412921Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config	remote.origin.url
2025-12-04T12:52:12.1422642Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2025-12-04T12:52:12.1439127Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config	remote.origin.url
2025-12-04T12:52:12.1448099Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2025-12-04T12:52:12.1480859Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config	remote.origin.url
2025-12-04T12:52:12.1497197Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2025-12-04T12:52:12.1507710Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config	remote.origin.url
2025-12-04T12:52:12.1515915Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2025-12-04T12:52:12.1540269Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config	remote.origin.url
2025-12-04T12:52:12.1550011Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2025-12-04T12:52:12.1583223Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config	remote.origin.url
2025-12-04T12:52:12.1597860Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2025-12-04T12:52:12.1613954Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config	remote.origin.url
2025-12-04T12:52:12.1623205Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2025-12-04T12:52:12.1642336Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config	remote.origin.url
2025-12-04T12:52:12.1660095Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2025-12-04T12:52:12.1687738Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config	remote.origin.url
2025-12-04T12:52:12.1698284Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp'
2025-12-04T12:52:12.1723242Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/config	remote.origin.url
2025-12-04T12:52:12.1739039Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T12:52:12.1758702Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/civetweb/config	remote.origin.url
2025-12-04T12:52:12.1770684Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T12:52:12.1797312Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/googletest/config	remote.origin.url
2025-12-04T12:52:12.1811244Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2025-12-04T12:52:12.1820591Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config	remote.origin.url
2025-12-04T12:52:12.1828057Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2025-12-04T12:52:12.1838314Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config	remote.origin.url
2025-12-04T12:52:12.1847795Z Entering 'third_party/kleidiai'
2025-12-04T12:52:12.1858134Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/kleidiai/config	remote.origin.url
2025-12-04T12:52:12.1867830Z Entering 'third_party/mimalloc'
2025-12-04T12:52:12.1887674Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/mimalloc/config	remote.origin.url
2025-12-04T12:52:12.1907541Z Entering 'third_party/nlohmann'
2025-12-04T12:52:12.1924693Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config	remote.origin.url
2025-12-04T12:52:12.1934765Z Entering 'third_party/onnx'
2025-12-04T12:52:12.1945878Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config	remote.origin.url
2025-12-04T12:52:12.1972056Z Entering 'third_party/onnx/third_party/pybind11'
2025-12-04T12:52:12.1987357Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config	remote.origin.url
2025-12-04T12:52:12.1998864Z Entering 'third_party/opentelemetry-cpp'
2025-12-04T12:52:12.2010175Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/config	remote.origin.url
2025-12-04T12:52:12.2020030Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2025-12-04T12:52:12.2038573Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/benchmark/config	remote.origin.url
2025-12-04T12:52:12.2049516Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2025-12-04T12:52:12.2064914Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/googletest/config	remote.origin.url
2025-12-04T12:52:12.2074318Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2025-12-04T12:52:12.2102300Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/ms-gsl/config	remote.origin.url
2025-12-04T12:52:12.2119835Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2025-12-04T12:52:12.2147377Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/nlohmann-json/config	remote.origin.url
2025-12-04T12:52:12.2168005Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2025-12-04T12:52:12.2198175Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentelemetry-proto/config	remote.origin.url
2025-12-04T12:52:12.2207511Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2025-12-04T12:52:12.2223543Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentracing-cpp/config	remote.origin.url
2025-12-04T12:52:12.2234536Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2025-12-04T12:52:12.2255519Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/config	remote.origin.url
2025-12-04T12:52:12.2267235Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2025-12-04T12:52:12.2288738Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/civetweb/config	remote.origin.url
2025-12-04T12:52:12.2300108Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2025-12-04T12:52:12.2335029Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/googletest/config	remote.origin.url
2025-12-04T12:52:12.2357049Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2025-12-04T12:52:12.2368849Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/tools/vcpkg/config	remote.origin.url
2025-12-04T12:52:12.2397390Z Entering 'third_party/pocketfft'
2025-12-04T12:52:12.2412946Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config	remote.origin.url
2025-12-04T12:52:12.2431594Z Entering 'third_party/protobuf'
2025-12-04T12:52:12.2451781Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config	remote.origin.url
2025-12-04T12:52:12.2462507Z Entering 'third_party/protobuf/third_party/benchmark'
2025-12-04T12:52:12.2476892Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config	remote.origin.url
2025-12-04T12:52:12.2485132Z Entering 'third_party/protobuf/third_party/googletest'
2025-12-04T12:52:12.2504059Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config	remote.origin.url
2025-12-04T12:52:12.2515834Z Entering 'third_party/psimd'
2025-12-04T12:52:12.2526723Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config	remote.origin.url
2025-12-04T12:52:12.2535845Z Entering 'third_party/pthreadpool'
2025-12-04T12:52:12.2546748Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config	remote.origin.url
2025-12-04T12:52:12.2555102Z Entering 'third_party/pybind11'
2025-12-04T12:52:12.2565227Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config	remote.origin.url
2025-12-04T12:52:12.2584356Z Entering 'third_party/python-peachpy'
2025-12-04T12:52:12.2602078Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config	remote.origin.url
2025-12-04T12:52:12.2611749Z Entering 'third_party/sleef'
2025-12-04T12:52:12.2622417Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config	remote.origin.url
2025-12-04T12:52:12.2631248Z Entering 'third_party/tensorpipe'
2025-12-04T12:52:12.2641525Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config	remote.origin.url
2025-12-04T12:52:12.2651638Z Entering 'third_party/tensorpipe/third_party/googletest'
2025-12-04T12:52:12.2670253Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config	remote.origin.url
2025-12-04T12:52:12.2679789Z Entering 'third_party/tensorpipe/third_party/libnop'
2025-12-04T12:52:12.2689729Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config	remote.origin.url
2025-12-04T12:52:12.2696885Z Entering 'third_party/tensorpipe/third_party/libuv'
2025-12-04T12:52:12.2706220Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config	remote.origin.url
2025-12-04T12:52:12.2724141Z Entering 'third_party/tensorpipe/third_party/pybind11'
2025-12-04T12:52:12.2734188Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config	remote.origin.url
2025-12-04T12:52:12.2742246Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2025-12-04T12:52:12.2770190Z file:/home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config	remote.origin.url
2025-12-04T12:52:12.2812093Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.2857049Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.2899662Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.2923474Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.2964179Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NVTX/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3004110Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3040582Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3080076Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3117798Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/modules/3rdparty/composable_kernel/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3154660Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3192439Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/composable_kernel/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3219902Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpp-httplib/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3246424Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3282647Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3317857Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3351238Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3388528Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/asmjit/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3423521Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/composable_kernel/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3459652Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cpuinfo/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3483867Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cutlass/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3518447Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3557388Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/hipify_torch/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3593002Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/json/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3618653Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3655567Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/composable_kernel/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3683640Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/cutlass/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3713557Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3748433Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3785719Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3812199Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3837126Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3870294Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3900465Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3937434Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.3971588Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4007946Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4035501Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4062139Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4088919Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4130019Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4168774Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4198904Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4236266Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4275261Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4301599Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4333782Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4369557Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/civetweb/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4396179Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/prometheus-cpp/modules/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4428615Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4464596Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4499162Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/kleidiai/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4524875Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/mimalloc/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4550305Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4576031Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4612205Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4638192Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4676710Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/benchmark/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4711759Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4739117Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/ms-gsl/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4771617Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/nlohmann-json/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4807386Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentelemetry-proto/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4834261Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentracing-cpp/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4869623Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4898187Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/civetweb/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4936634Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.4977180Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/tools/vcpkg/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5014676Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5052273Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5080060Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5106093Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5132540Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5159470Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5196784Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5230141Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5263007Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5288704Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5325421Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5349963Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5387611Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5419011Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5456462Z [command]/usr/bin/git config --file /home/runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config --name-only --get-regexp ^includeIf\.gitdir:
2025-12-04T12:52:12.5683064Z Cleaning up orphan processes
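Taken together, the post-job step above scrubs the credentials the checkout step injected: it unsets core.sshCommand and the http.https://github.com/.extraheader authorization header in the superproject and in every submodule, then walks the submodule configs (remote.origin.url with --show-origin, plus the includeIf.gitdir scans) presumably to confirm nothing else references the temporary auth setup. A condensed bash sketch of the same scrub, using the exact git commands recorded in the log (the surrounding script is illustrative, not the action's own implementation):

  #!/usr/bin/env bash
  # Post-job credential scrub, mirroring the commands recorded above.
  set -euo pipefail
  cd /home/runner/_work/pytorch/pytorch

  # Drop any per-repo SSH command override in every submodule; the probe
  # tolerates "no such key" since --get-regexp exits 1 when nothing matches.
  git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"

  # Drop the injected Authorization header used for https://github.com fetches.
  git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"

  # Report where each submodule's origin URL is defined, as the log does.
  git submodule foreach --recursive git config --local --show-origin --name-only --get-regexp remote.origin.url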